I have seen a lot of posts that treat the case of just 2 random variables:
Independent random variables and function of them
Are functions of independent variables also independent?
If $X$ and $Y$ are independent then $f(X)$ and $g(Y)$ are also independent.
If $X$ and $Y$ are independent. How about $X^2$ and $Y$? And how about $f(X)$ and $g(Y)$?
Are squares of independent random variables independent?
Prove that if $X$ and $Y$ are independent, then $h(X)$ and $g(Y)$ are independent in BASIC probability -- can we use double integration? (I actually asked the 2-variable elementary case there myself, but there's no answer yet)
I have yet to see a post that treats the case of at least 3 random variables.
Please answer for 2 situations:
1 - for advanced probability theory:
Let $X_i: \Omega \to \mathbb R$, $i \in I$, be independent random variables on $(\Omega, \mathscr F, \mathbb P)$, where $I$ is, I think, an arbitrary index set (or maybe it has to be countable). Of course, assume $\operatorname{card}(I) \ge 3$. Then show that the $f_i(X_i)$ are independent; more precisely, give conditions on the $f_i$ such that the $f_i(X_i)$ are independent. I read in the above posts that the condition is 'measurable', which I guess means $\mathscr F$-measurable, but I could have sworn I read before that the condition is supposed to be 'bounded and Borel-measurable', i.e. bounded and $\mathscr B(\mathbb R)$-measurable for $(\mathbb R, \mathscr B(\mathbb R), \text{Lebesgue})$.
2 - for elementary probability theory:
Let $X_i: \Omega \to \mathbb R$ be independent random variables that have pdfs. Use the elementary definition of independence, namely 'independent if the joint pdf splits up into the product of the marginal pdfs', or something like that. I guess the index set $I$ need not be finite, in which case I think the definition is that every finite subcollection is independent, i.e. its joint pdf splits up. Give conditions on the $f_i$ such that the $f_i(X_i)$ are independent. Of course, here we can't exactly say that $f_i$ is 'measurable'.
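To make the elementary question concrete: under the (much stronger than necessary, I assume) condition that each $f_i$ is strictly increasing and continuously differentiable with $f_i' > 0$, I can get the factorization from the usual change-of-variables formula for pdfs. Sketch for three variables, writing $Y_i = f_i(X_i)$ and using that the joint pdf of $(X_1, X_2, X_3)$ splits up as $\prod_i p_{X_i}(x_i)$:
$$p_{Y_1,Y_2,Y_3}(y_1,y_2,y_3)=p_{X_1,X_2,X_3}\bigl(f_1^{-1}(y_1),f_2^{-1}(y_2),f_3^{-1}(y_3)\bigr)\prod_{i=1}^{3}\left|\frac{d}{dy_i}f_i^{-1}(y_i)\right|=\prod_{i=1}^{3}p_{X_i}\bigl(f_i^{-1}(y_i)\bigr)\left|\frac{d}{dy_i}f_i^{-1}(y_i)\right|=\prod_{i=1}^{3}p_{Y_i}(y_i),$$
so the joint pdf of $(Y_1, Y_2, Y_3)$ splits up as well. But I don't know what the right general condition on the $f_i$ is, which is exactly what I'm asking.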
Context for the elementary case: I'm trying to justify the computation behind the formula for the moment-generating function of a linear combination of independent random variables. See here: Proving inequality of probability to derive upper bound for moment-generating functions
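Concretely, the step I want to justify is (sketch for two independent variables and constants $a_1, a_2$; a general finite linear combination works the same way):
$$M_{a_1 X_1 + a_2 X_2}(t)=E\left[e^{t a_1 X_1}\,e^{t a_2 X_2}\right]\overset{?}{=}E\left[e^{t a_1 X_1}\right]E\left[e^{t a_2 X_2}\right]=M_{X_1}(a_1 t)\,M_{X_2}(a_2 t),$$
where the questioned equality is exactly the point: it needs $e^{t a_1 X_1}$ and $e^{t a_2 X_2}$, i.e. $f_i(X_i)$ with $f_i(x) = e^{t a_i x}$, to be independent.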
Based on the application of the Riemann–Stieltjes integral (or Lebesgue–Stieltjes integral) to probability, I think the condition is: any $f_i$ such that $E[f_i(X_i)]$ exists (i.e. $E[|f_i(X_i)|]$ is finite).
This is the same condition as in Larsen and Marx, Introduction to Mathematical Statistics and Its Applications.
I think $f$ bounded implies this, but not conversely: for example, $f(x) = x$ is unbounded, yet $E[|f(X)|] < \infty$ whenever $X$ has a finite mean.
Update: Also related, via another question: If $g$ is a continuous and increasing function of $x$, prove that $g(X)$ is a random variable. --> More generally, for what functions $g$ is $g(X)$ a random variable? Of course, in advanced probability we just say $g$ is Borel-measurable or $\mathscr F$-measurable or whatever, but I think in elementary probability we say: $g$ such that $E[g(X)]$ exists, i.e. $E[|g(X)|] < \infty$, EVEN THOUGH this is, I believe, a stronger condition than $g$ being 'measurable', whatever that means in elementary probability. But then again this is kind of weird, since we don't even necessarily expect $E[X]$ to exist (i.e. $E[|X|] < \infty$), or any higher moment $E[X^n]$ for that matter.
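For the advanced-probability reading of that linked question, I think the point is just that preimages compose (a sketch):
$$\{g(X)\le t\}=X^{-1}\bigl(\{x: g(x)\le t\}\bigr),$$
and for $g$ continuous and increasing the set $\{x: g(x)\le t\}$ is closed and downward-closed, hence $\varnothing$, $\mathbb R$, or an interval $(-\infty, c]$; in every case it is Borel, so $\{g(X)\le t\}\in\sigma(X)\subseteq\mathscr F$ and $g(X)$ is a random variable.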
1)
For $i\in I$ let $\sigma\left(X_{i}\right)\subseteq\mathscr{F}$ denote the $\sigma$-algebra generated by the random variable $X_{i}:\Omega\to\mathbb{R}$.
Then actually we have $\sigma\left(X_{i}\right)=X_{i}^{-1}\left(\mathscr{B}\left(\mathbb{R}\right)\right)=\left\{ X_{i}^{-1}\left(B\right)\mid B\in\mathscr{B}\left(\mathbb{R}\right)\right\} $.
The collection $(X_i)_{i\in I}$ of random variables is independent iff:
For every finite $J\subseteq I$ and every collection $\left\{ A_{i}\mid i\in J\right\} $ satisfying $\forall i\in J\left[A_{i}\in\sigma\left(X_{i}\right)\right]$ we have:
$$P\left(\bigcap_{i\in J}A_{i}\right)=\prod_{i\in J}P\left(A_{i}\right)\tag {1}$$
Now suppose $f_{i}:\mathbb{R}\to Y_{i}$ for $i\in I$, where $\left(Y_{i},\mathcal{A}_{i}\right)$ denotes a measurable space and where every $f_{i}$ is Borel-measurable in the sense that $f_{i}^{-1}\left(\mathcal{A}_{i}\right)\subseteq\mathscr{B}\left(\mathbb{R}\right)$. Then, to check independence of the $f_i(X_i)$, we must look at the $\sigma$-algebras $\sigma\left(f_{i}\left(X_{i}\right)\right)$.
But evidently: $$\sigma\left(f_{i}\left(X_{i}\right)\right)=\left(f_{i}\circ X_{i}\right)^{-1}\left(\mathcal{A}_{i}\right)=X_{i}^{-1}\left(f_{i}^{-1}\left(\mathcal{A}_{i}\right)\right)\subseteq X_{i}^{-1}\left(\mathscr{B}\left(\mathbb{R}\right)\right)=\sigma\left(X_{i}\right).$$ So if $\left(1\right)$ is satisfied for the $\sigma\left(X_{i}\right)$, then it is automatically satisfied for the smaller $\sigma$-algebras $\sigma\left(f_{i}\left(X_{i}\right)\right)$.
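If a concrete illustration helps: here is a quick Monte Carlo sanity check of the product rule $(1)$ for three independent variables and three Borel-measurable transformations. It is not a proof, and all the particular distributions, functions and events below are arbitrary choices made for illustration.

```python
import numpy as np

# Draw three independent random variables X1, X2, X3, apply measurable
# functions f1, f2, f3, and compare P(A1 ∩ A2 ∩ A3) with P(A1) P(A2) P(A3)
# for events Ai = {fi(Xi) in Bi}.

rng = np.random.default_rng(0)
n = 10**6

x1 = rng.normal(size=n)          # X1 ~ N(0, 1)
x2 = rng.exponential(size=n)     # X2 ~ Exp(1)
x3 = rng.uniform(-1, 1, size=n)  # X3 ~ Uniform(-1, 1)

y1 = x1**2            # f1(x) = x^2
y2 = np.cos(x2)       # f2(x) = cos(x)
y3 = np.floor(3 * x3) # f3(x) = floor(3x)  (measurable but not continuous)

# Events Ai = {fi(Xi) in Bi}
a1 = y1 > 1.0
a2 = y2 < 0.0
a3 = y3 == 0.0

lhs = np.mean(a1 & a2 & a3)                    # estimate of P(A1 ∩ A2 ∩ A3)
rhs = np.mean(a1) * np.mean(a2) * np.mean(a3)  # estimate of P(A1) P(A2) P(A3)
print(f"P(A1,A2,A3) ~ {lhs:.4f},  P(A1)P(A2)P(A3) ~ {rhs:.4f}")
```

With $10^6$ samples the two estimates should agree to within Monte Carlo error, as $(1)$ predicts.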
2)
The concept of independence of random variables has an impact on pdfs and on the calculation of moments, but its definition stands completely apart from them. From, e.g., a factorization of pdfs it can be deduced that there is independence, but things like that must not be promoted to the status of "definition of independence". In situations like that we can at most say that it is a sufficient (not necessary) condition for independence. If we wonder "what is needed for the $f_i(X_i)$ to be independent?", then we must focus on the definition of independence (not on sufficient conditions). Doing so, we find that measurability of the $f_i$ is enough whenever the $X_i$ are already independent.
BCLC edit (let drhab edit this part further): There's no 'measurable' in elementary probability, so we just say 'suitable' or 'well-behaved', in the sense that whatever functions students of elementary probability encounter, we hope they are suitable. Probably some textbooks will use conditions weaker than 'measurable', and those will serve as that book's definition of independence.
Edit: Functions that are not measurable (or not suitable, if you like) are very rare in the usual context. The axiom of choice is needed to prove the existence of such functions. In that sense you could say that constructible functions (those for which no arbitrary choice function is needed) are suitable.