Prove that/explain how, for independent random variables $X_i$, the $f_i(X_i)$ are also independent (in particular without measure theory)
I have seen a lot of posts that describe the case for just 2 random variables.
- Independent random variables and function of them
- Are functions of independent variables also independent?
- If $X$ and $Y$ are independent then $f(X)$ and $g(Y)$ are also independent.
- If $X$ and $Y$ are independent. How about $X^2$ and $Y$? And how about $f(X)$ and $g(Y)$?
- Are squares of independent random variables independent?
- Prove that if $X$ and $Y$ are independent, then $h(X)$ and $g(Y)$ are independent in BASIC probability -- can we use double integration? (I actually asked the two-variable elementary case there myself, but there's no answer.)
I have yet to see a post that describes the case for at least 3.
Please answer for two situations:
1 - for advanced probability theory:
Let $X_i: \Omega \to \mathbb R$ be independent random variables on $(\Omega, \mathscr F, \mathbb P)$, with $i \in I$ for an arbitrary index set $I$, I think (or maybe $I$ has to be countable). Of course, assume $\operatorname{card}(I) \ge 3$. Then show the $f_i(X_i)$ are independent. Give conditions on the $f_i$ such that the $f_i(X_i)$ are independent. I read in the above posts that the condition is 'measurable', which I guess means $\mathscr F$-measurable, but I could have sworn that I read before that the condition is supposed to be 'bounded and Borel-measurable', as in bounded and $\mathscr B(\mathbb R)$-measurable for $(\mathbb R, \mathscr B(\mathbb R), \text{Lebesgue})$.
2 - for elementary probability theory:
Let $X_i: \Omega \to \mathbb R$ be independent random variables that have pdfs. Use the elementary-probability definition of independence, i.e. 'independent if the joint pdf splits up', or something like that. I guess the index set $I$ need not be finite, in which case I think the definition is that the joint pdf of any finite subset of the variables splits up. Give conditions on the $f_i$ such that the $f_i(X_i)$ are independent. Of course we can't exactly say that $f_i$ is 'measurable'.
---
Context for the elementary case: I'm trying to justify the computation behind the formula for the moment-generating function of a linear combination of independent random variables. See here: Proving inequality of probability to derive upper bound for moment-generating functions
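For reference, here is a sketch of the computation that needs this fact, assuming the $X_i$ are independent with moment-generating functions $M_{X_i}$ and writing the linear combination as $\sum_{i=1}^n a_i X_i$ with constants $a_i$: once the $e^{t a_i X_i}$ (i.e. the $f_i(X_i)$ with $f_i(x) = e^{t a_i x}$) are known to be independent, the expectation of their product factors, so

$$M_{a_1X_1+\cdots+a_nX_n}(t)=E\!\left[e^{t\sum_{i=1}^n a_iX_i}\right]=E\!\left[\prod_{i=1}^n e^{t a_iX_i}\right]=\prod_{i=1}^n E\!\left[e^{t a_iX_i}\right]=\prod_{i=1}^n M_{X_i}(a_it).$$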
---
Based on the application of the Riemann–Stieltjes integral (or Lebesgue–Stieltjes integral) to probability, I think the condition is: any $f_i$ such that $E[f_i(X_i)]$ exists (i.e. $E[|f_i(X_i)|]$ is finite).
---
This is the same condition as in Larsen and Marx, Introduction to Mathematical Statistics and Its Applications.
---
I think $f$ being bounded implies this, but not conversely.
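A quick check of the first implication, assuming $|f|\le M$ and writing the expectation as a Stieltjes integral against the CDF $F_X$:

$$E\big[|f(X)|\big]=\int_{\mathbb R}|f(x)|\,dF_X(x)\le M\int_{\mathbb R}dF_X(x)=M<\infty,$$

while the converse fails: e.g. $f(x)=x$ with $X$ standard normal has $E[|f(X)|]<\infty$ even though $f$ is unbounded.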
---
Update: Also related, through another question: If $g$ is a continuous and increasing function of $x$, prove that $g(X)$ is a random variable. More generally, for what functions $g$ is $g(X)$ a random variable? Of course in advanced probability we just say $g$ is Borel-measurable or $\mathscr F$-measurable or whatever, but I think in elementary probability we say: $g$ such that $E[g(X)]$ exists, i.e. $E[|g(X)|] < \infty$, EVEN THOUGH this is, I believe, a stronger condition than that $g$ is 'measurable', whatever that means in elementary probability. But then again this is kind of weird, since we don't even necessarily expect $E[X]$ to exist (i.e. $E[|X|] < \infty$), or any higher moment $E[X^n]$ for that matter.
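For what it's worth, here is the kind of computation that settles the continuous, strictly increasing case of that linked question without any integrability assumption (a sketch, assuming $g$ is continuous and strictly increasing, so $g^{-1}$ exists on the range of $g$): the event $\{g(X)\le y\}$ is just an event about $X$,

$$P\big(g(X)\le y\big)=P\big(X\le g^{-1}(y)\big)=F_X\big(g^{-1}(y)\big)\qquad\text{for }y\text{ in the range of }g,$$

so $g(X)$ has a perfectly good CDF regardless of whether $E[|g(X)|]$ is finite.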
1)

For $i\in I$ let $\sigma\left(X_{i}\right)\subseteq\mathscr{F}$ denote the $\sigma$-algebra generated by the random variable $X_{i}:\Omega\to\mathbb{R}$.
Then actually we have $\sigma\left(X_{i}\right)=X_{i}^{-1}\left(\mathscr{B}\left(\mathbb{R}\right)\right)=\left\{ X_{i}^{-1}\left(B\right)\mid B\in\mathscr{B}\left(\mathbb{R}\right)\right\} $.
The collection $(X_i)_{i\in I}$ of random variables is independent iff:
For every finite $J\subseteq I$ and every collection $\left\{ A_{i}\mid i\in J\right\} $ satisfying $\forall i\in J\left[A_{i}\in\sigma\left(X_{i}\right)\right]$ we have:
$$P\left(\bigcap_{i\in J}A_{i}\right)=\prod_{i\in J}P\left(A_{i}\right)\tag {1}$$
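Unwrapping this for, say, three of the variables: taking $J=\{1,2,3\}$ and $A_i = X_i^{-1}(B_i)$ with $B_i \in \mathscr B(\mathbb R)$, condition $(1)$ reads

$$P\left(X_1\in B_1,\ X_2\in B_2,\ X_3\in B_3\right)=P\left(X_1\in B_1\right)P\left(X_2\in B_2\right)P\left(X_3\in B_3\right),$$

and by the identification $\sigma(X_i)=X_i^{-1}(\mathscr B(\mathbb R))$ above, nothing more than this (over all finite $J$ and all Borel sets $B_i$) is being required.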
Now if $f_{i}:\mathbb{R}\to Y_{i}$ for $i\in I$, where $\left(Y_{i},\mathcal{A}_{i}\right)$ denotes a measurable space and where every $f_{i}$ is Borel-measurable in the sense that $f_{i}^{-1}\left(\mathcal{A}_{i}\right)\subseteq\mathscr{B}\left(\mathbb{R}\right)$, then for checking independence we must look at the $\sigma$-algebras $\sigma\left(f_{i}\left(X_{i}\right)\right)$.
But evidently: $$\sigma\left(f_{i}\left(X_{i}\right)\right)=\left(f_{i}\circ X_{i}\right)^{-1}\left(\mathcal{A}_{i}\right)=X_{i}^{-1}\left(f_{i}^{-1}\left(\mathcal{A}_{i}\right)\right)\subseteq X_{i}^{-1}\left(\mathscr{B}\left(\mathbb{R}\right)\right)=\sigma\left(X_{i}\right)$$ So if $\left(1\right)$ is satisfied for the $\sigma\left(X_{i}\right)$ then automatically it is satisfied for the smaller $\sigma\left(f_{i}\left(X_{i}\right)\right)$.
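A concrete instance of this inclusion, taking $f_i(x)=x^2$ (so $Y_i=\mathbb R$ and $\mathcal A_i=\mathscr B(\mathbb R)$): for every $a\ge 0$,

$$\big\{X_i^2\le a\big\}=\big\{-\sqrt a\le X_i\le\sqrt a\big\}=X_i^{-1}\big([-\sqrt a,\sqrt a]\big)\in\sigma(X_i),$$

so every event expressible through $X_i^2$ is already an event expressible through $X_i$, and the product formula $(1)$ is inherited.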
2)
The concept of independence of random variables has an impact on PDFs and on the calculation of moments, but its definition stands completely apart from them. Based on, e.g., a factorization of PDFs it can be deduced that there is independence, but things like that must not be promoted to the status of "definition of independence". In situations like that we can at most say that it is a sufficient (not necessary) condition for independence. If we wonder "what is needed for the $f_i(X_i)$ to be independent?" then we must focus on the definition of independence (not on sufficient conditions). Doing so, we find that measurability of the $f_i$ is enough whenever the $X_i$ are independent already.
BCLC edit (let drhab edit this part further): There's no 'measurable' in elementary probability, so we just say 'suitable' or 'well-behaved', in the sense that whatever functions students of elementary probability encounter, we hope they are suitable. Probably some textbooks will use conditions weaker than 'measurable' as that book's working definition of independence.
Edit: Functions that are not measurable (or not suitable, if you like) are, in the usual contexts, very rare. The axiom of choice is needed to prove that such functions exist at all. In that sense you could say that constructible functions (those for which no arbitrary choice function is needed) are suitable.
measure-theoretic:
The measure-theoretic answer is extremely general. It requires nothing special about the real line or Borel sets, just pure measurability. Suppose $(X_i)_{i \in I}$ is a family (countability is not needed) of random elements, where $X_i: (\Omega, \mathscr{F}) \to (A_i, \mathscr{A}_i)$, i.e. each $X_i$ takes values in some space $A_i$ and $X_i$ is measurable, but all $X_i$ live on the same input space $\Omega$. No assumptions are made about the spaces $\Omega, A_i$ or $\sigma$-algebras $\mathscr{F}, \mathscr{A}_i$.
Let a corresponding family of functions $(f_i)_{i \in I}$ be given such that for each $i$, $f_i: (A_i, \mathscr{A}_i) \to (B_i, \mathscr{B}_i)$ is measurable. That is, each $f_i$ accepts inputs from $A_i$ (the codomain of $X_i$) and takes values in some space $B_i$ such that $f_i$ is measurable. (This ensures that for each $i$, $f_i(X_i): (\Omega, \mathscr{F}) \to (B_i, \mathscr{B}_i)$ makes sense and is measurable.) Again, no assumptions are made about the spaces $B_i$ or $\sigma$-algebras $\mathscr{B}_i$.
Now suppose $(X_i)_i$ is an independent family under some probability measure $P$ on $(\Omega, \mathscr{F})$, i.e. that for any finite subset $J \subseteq I$ of indices and any measurable subsets $U_i \in \mathscr{A}_i$ one has $$P(X_i \in U_i \text{ for all } i \in J) = \prod_{i \in J} P(X_i \in U_i).$$
Then we claim that $(f_i(X_i))_{i \in I}$ is also an independent family under $P$. Indeed, let $J \subseteq I$ be some finite subset of indices and let measurable subsets $V_i \in \mathscr{B}_i$ be given. For each $i \in J$, measurability of $f_i$ gives $f_i^{-1}(V_i) \in \mathscr{A}_i$, and thus $$\begin{aligned} P(f_i(X_i) \in V_i \text{ for all } i \in J) &= P(X_i \in f^{-1}_i(V_i) \text{ for all } i \in J) \\ &= \prod_{i \in J} P(X_i \in f^{-1}_i(V_i)) \\ &= \prod_{i \in J} P(f_i(X_i) \in V_i). \end{aligned}$$ Thus, $(f_i(X_i))_{i \in I}$ is an independent family.
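This is not part of the proof, but a minimal Monte Carlo sketch (assuming NumPy and the arbitrarily chosen functions $f(x)=x^2$, $g(y)=\sin y$, $h(z)=\mathbf 1_{z>0.5}$) can make the claim concrete for three independent uniforms: the empirical probability of an intersection of events about $f(X_1)$, $g(X_2)$, $h(X_3)$ should approximately equal the product of the empirical marginal probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

# Three independent samples, uniform on [0, 1]; they play the role of X_1, X_2, X_3.
x, y, z = rng.random(n), rng.random(n), rng.random(n)

# Illustrative (hypothetical) measurable functions applied coordinate-wise.
fx, gy, hz = x**2, np.sin(y), (z > 0.5)

# Events {f(X_1) <= 0.25}, {g(X_2) <= 0.5}, {h(X_3) = 1}.
a, b, c = fx <= 0.25, gy <= 0.5, hz

joint = np.mean(a & b & c)              # empirical P(A ∩ B ∩ C)
product = a.mean() * b.mean() * c.mean()  # empirical P(A) P(B) P(C)
print(joint, product)  # should agree up to Monte Carlo error
```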
elementary probability:
As for the elementary probability solution, it really depends on what your definition of independence is. In all cases, the definition only involves finite subsets of the random variables. I would say that without the definition of a $\sigma$-algebra, the proof is out of reach unless you make extra (unnecessary) assumptions. If your definition is that densities split as a product, then you must assume some conditions to ensure that $f_i(X_i)$ has a density and that you can apply the usual density transformation rules. If your functions take values in a countable space, the above proof can be repeated essentially verbatim, replacing the arbitrary $U_i, V_i$ with singletons, i.e. looking at $P(f_i(X_i) = y_i \text{ for all } i \in J)$, as made explicit below.
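Made explicit for two variables with countably many output values (the general finite case is identical; as in the rest of the elementary treatment, it is taken for granted that events such as $\{X\in f^{-1}(\{y\})\}$ have probabilities):

$$\begin{aligned} P\big(f(X)=y,\ g(Y)=z\big) &= P\big(X\in f^{-1}(\{y\}),\ Y\in g^{-1}(\{z\})\big)\\ &= P\big(X\in f^{-1}(\{y\})\big)\,P\big(Y\in g^{-1}(\{z\})\big)\\ &= P\big(f(X)=y\big)\,P\big(g(Y)=z\big).\end{aligned}$$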
Alternatively, since you are avoiding a measure-theoretic answer to a question whose very definition is measure-theoretic, perhaps correctness of the argument is not a requirement? Just tell your students the independence condition must hold for "all sets" (verbal asterisk) and then give the above proof without mentioning measurability. Or, if your students are perhaps more comfortable with topology, you could use only continuous functions and look at preimages of open sets.
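For instance, with continuous $f_i:\mathbb R\to\mathbb R$ the key step of the proof only ever needs preimages of open sets: for an open $V\subseteq\mathbb R$, continuity makes $f_i^{-1}(V)$ open, so

$$\{f_i(X_i)\in V\}=\{X_i\in f_i^{-1}(V)\}$$

is again an event about $X_i$ of the kind the independence assumption on the $X_i$ already covers.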