I am familiar with the fact that for $N$ events we have the notions of pairwise independence and mutual independence. I am interested in how these notions extend to random variables. More explicitly, my question is the following:
In probability theory, do we have the same two concepts for $N$ random variables (instead of events), i.e. pairwise independence vs. mutual independence?
If we do have these two concepts, then I have a follow-up question on the Central Limit Theorem (CLT). The CLT (in its simplest form) concerns $N$ i.i.d. (independent and identically distributed) random variables.
What is meant by "independent [...] random variables" in the context of the CLT? - pairwise independent or mutually independent?
This leads to the question: if we take the weaker requirement of $N$ pairwise independent random variables, does the conclusion of the theorem still hold? Clearly, if the conclusion holds under pairwise independence, it also holds under mutual independence, since mutual independence implies pairwise independence.
Let's start off with what it means for $2$ random variables to be independent. This will allow us to build our way up to pairwise independence of random variables, and from there to mutual independence (sometimes called joint independence). Two random variables $X$ and $Y$ are independent if, for all $x, y \in \mathbb{R}$,
$$P(X \le x,\, Y \le y) = P(X \le x)\,P(Y \le y).$$
In reality, the above is not really a definition; it is a criterion for the independence of random variables (which is properly defined in terms of the generated sigma-algebras). For the purposes of this answer, however, it can be treated as a definition.
The random variables $X_1, \dots, X_N$ are pairwise independent if $X_i$ and $X_j$ are independent for every pair $i \neq j$. The concept of mutual (joint) independence is stronger: it extends the idea of our first definition to all of the random variables at once. We say $X_1, \dots, X_N$ are mutually independent if, for all $x_1, \dots, x_N \in \mathbb{R}$,
$$P(X_1 \le x_1, \dots, X_N \le x_N) = \prod_{i=1}^{N} P(X_i \le x_i).$$
Now looking at the Central Limit Theorem (CLT), you correctly mention that we need the random variables to be independent and identically distributed. Unless otherwise stated, when we use the word "independent" for multiple random variables, this is shorthand for "mutually (jointly) independent".
If we assume only pairwise independence rather than mutual independence (as you suggest in your question), then this is not sufficient for the CLT to hold without additional assumptions. There are many proofs of the Central Limit Theorem, but in the standard proof, one of the very first steps is to factor the characteristic function of the sum into a product of the individual characteristic functions. This factorization is a consequence of mutual independence and does not hold under pairwise independence alone.
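To make that step concrete, here is a sketch of the factorization, writing $S_N = X_1 + \cdots + X_N$ and $\varphi_X(t) = \mathbb{E}[e^{itX}]$ for the characteristic function:

```latex
\varphi_{S_N}(t)
  = \mathbb{E}\left[e^{it(X_1 + \cdots + X_N)}\right]
  = \mathbb{E}\left[\prod_{j=1}^{N} e^{itX_j}\right]
  = \prod_{j=1}^{N} \mathbb{E}\left[e^{itX_j}\right]
  = \prod_{j=1}^{N} \varphi_{X_j}(t).
```

The third equality is where mutual independence is used: the expectation of a product of functions of all $N$ variables factorizes. Pairwise independence only guarantees this factorization for products involving two of the variables at a time.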
Note: an interesting property of mutual independence is that any subfamily of these random variables must also be mutually independent. Therefore, any collection of mutually independent random variables will automatically also satisfy pairwise independence (however, the converse does not hold in general).
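To see that the converse really does fail, here is a small sketch (in Python, using exact arithmetic) of the classical counterexample: $X$ and $Y$ are independent fair coin flips and $Z = X \oplus Y$ (XOR). Each of the three pairs is independent, yet the triple is not mutually independent.

```python
from fractions import Fraction
from itertools import product

# X, Y are independent fair bits; Z = X XOR Y.
# Sample space: four equally likely outcomes (x, y, x ^ y).
outcomes = [(x, y, x ^ y) for x, y in product([0, 1], repeat=2)]
p = Fraction(1, 4)  # probability of each outcome

def prob(event):
    """Probability that the predicate `event` holds for (x, y, z)."""
    return sum((p for o in outcomes if event(*o)), Fraction(0))

# Pairwise independence: P(A=a, B=b) == P(A=a) * P(B=b) for every pair.
pairs = [(0, 1), (0, 2), (1, 2)]  # index pairs for (X,Y), (X,Z), (Y,Z)
pairwise_ok = all(
    prob(lambda *v: v[i] == a and v[j] == b)
    == prob(lambda *v: v[i] == a) * prob(lambda *v: v[j] == b)
    for i, j in pairs
    for a in (0, 1)
    for b in (0, 1)
)

# Mutual independence fails: P(X=1, Y=1, Z=1) = 0, but the product
# of the marginals is (1/2)^3 = 1/8.
triple = prob(lambda x, y, z: x == y == z == 1)
product_of_marginals = (
    prob(lambda x, y, z: x == 1)
    * prob(lambda x, y, z: y == 1)
    * prob(lambda x, y, z: z == 1)
)

print(pairwise_ok)                   # True
print(triple, product_of_marginals)  # 0 1/8
```

This also illustrates why the characteristic-function argument breaks down: expectations of functions of all three variables need not factorize, even though expectations of functions of any two of them do.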