When defining recursive functions, are the following two statements equivalent?$$f:\mathbb{N}^n\rightarrow\mathbb{N}^m, g:\mathbb{N}^m\rightarrow\mathbb{N}^k \text{ recursive}\implies g\circ f \text{ recursive}$$ and $$f_1:\mathbb{N}^n\rightarrow\mathbb{N},...,f_m:\mathbb{N}^n\rightarrow\mathbb{N} \text{ and }g:\mathbb{N}^m\rightarrow \mathbb{N} \text{ recursive}\implies g(f_1,...,f_m) \text{ recursive.} $$
The first statement is a lemma in my notes, while the second is used in the definition. Nothing is said about the respective arities of the functions; for the first statement my notes only say 'composition of recursive functions is recursive', so I wrote that $g$ maps to $\mathbb{N}^k$. Is it possible to replace the second statement with the first in the definition and to obtain the second one as a consequence?
Another question: What is the name of the function $(f_1,...,f_m):\mathbb{N}^n\rightarrow\mathbb{N}^m$, when $f_1,...,f_m$ are already given? I read a text where $g(f_1,...,f_m)$ was called 'composition' as well, but I only know composition as the usual composition of two functions. Is there another name for said function, and for the kind of composition resulting from it being composed with another function?
I take "generalized composition" to mean what is written on the second line of the question.
The answer is then yes. This is because the functions $f_1, \ldots, f_m \colon \mathbb{N}^n \to \mathbb{N}$ are all computable if and only if the function $f\colon \mathbb{N}^n \to \mathbb{N}^m$ defined by $$ f(x_1, \ldots, x_n) = (f_1(x_1,\ldots, x_n),\ldots,f_m(x_1, \ldots, x_n))$$ is computable, and, in any case, for this $f$ we have $g \circ f = g(f_1, \ldots, f_m)$.
Clearly, if $f$ is computable, then so is each of its coordinate functions. Conversely, if $f_1,\ldots, f_m$ are all computable, I can combine their programs into a single program that returns an element of $\mathbb{N}^m$.
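This combining step can be sketched as follows (a Python illustration only, with hypothetical example functions; it is not a formal computability argument):

```python
# Given coordinate functions f1, ..., fm : N^n -> N, build the "tupled"
# function f : N^n -> N^m, and check that composing g with f agrees
# pointwise with the generalized composition g(f1, ..., fm).

def tuple_functions(*fs):
    """Combine f1, ..., fm into a single function returning an m-tuple."""
    def f(*xs):
        return tuple(fi(*xs) for fi in fs)
    return f

# Hypothetical example with n = 2 and m = 2:
f1 = lambda x, y: x + y
f2 = lambda x, y: x * y
g = lambda u, v: u + 2 * v

f = tuple_functions(f1, f2)

# g o f equals g(f1, f2) pointwise:
assert g(*f(3, 4)) == g(f1(3, 4), f2(3, 4))  # both sides are 7 + 2*12 = 31
```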
Really, the use of multiple inputs for a computable function is just a matter of convention. The entire notion of "computability" would be essentially unchanged if we looked only at computable functions of one variable. The only difference is that, when we want to compute a function that "naively" takes two or more variables, we have to encode the input correctly. For example, instead of computing $$ g(x,y) = x + y $$ we could compute $$ h(z) = \begin{cases} x + y & \text{if } z = 2^x3^y \\ 0 & \text{otherwise} \end{cases}$$ We would just have to send $h$ the number $2^x3^y$ whenever we wanted to compute $g(x,y)$. The same thing could be done for any computable function $g$ of any number of variables. As long as we are willing to prepare the input correctly, there is no need for the function to take more than one natural number input. Once we make this switch, the entire notion of "generalized" composition disappears.
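The encoding above can be made concrete (a Python sketch, with function names chosen here for illustration):

```python
# Encode a pair (x, y) as the single natural number 2^x * 3^y, and
# implement the one-variable function h(z) described above.

def encode(x, y):
    """Pack (x, y) into one natural number as 2^x * 3^y."""
    return 2 ** x * 3 ** y

def h(z):
    """Return x + y if z = 2^x * 3^y for some x, y; otherwise 0."""
    if z == 0:
        return 0
    x = 0
    while z % 2 == 0:
        z //= 2
        x += 1
    y = 0
    while z % 3 == 0:
        z //= 3
        y += 1
    # z must have no prime factors other than 2 and 3
    return x + y if z == 1 else 0

assert h(encode(5, 7)) == 12   # h simulates g(x, y) = x + y
assert h(10) == 0              # 10 = 2 * 5 is not of the form 2^x * 3^y
```

The decoding works because prime factorization is unique, so $x$ and $y$ are recoverable from $2^x3^y$ by repeated division.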
The reason this is not generally done is that multiple inputs make computability theory resemble mathematical practice more closely, and they avoid having to talk about pairing operations on natural numbers so early in the theory. But there is nothing "more" about computability that comes from having multiple inputs; they just make the syntax look nicer in some ways.
This is very similar to the fact that we don't really gain any new amount of computability by adding additional input or output tapes to a Turing machine; anything that can be computed by a Turing machine with multiple input and output tapes can be computed by a Turing machine with one of each, as long as we prepare the input correctly and interpret the output correctly.