Determine the PDF of $Z = XY$ when the joint pdf of $X$ and $Y$ is given by $p_{X,Y}(x,y)=2(1-x)$ for $0\le x,y\le 1$ (and $0$ otherwise).
Solution 1:
There are faster methods, but it can be a good idea, at least once or twice, to calculate the cumulative distribution function, and then differentiate to find the density.
The upside of doing it that way is that one can retain reasonably good control over what's happening. (There are also a number of downsides!)
So we want $F_Z(z)$, the probability that $Z\le z$. Let's do the easy bits first. It is clear that $F_Z(z)=0$ if $z\le 0$. And it is almost as clear that $F_Z(z)=1$ if $z\ge 1$. So from now on we suppose that $0\lt z\lt 1$.
Draw a picture of our square. For fixed $z$ between $0$ and $1$, draw the first quadrant part of the curve with equation $xy=z$.
This curve is a rectangular hyperbola, with the positive $x$ and $y$ axes as asymptotes. We want the probability that $(X,Y)$ lands in the part of our square which is on the "origin side" of the hyperbola.
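If you'd rather let the computer draw the picture, here is a throwaway matplotlib sketch (matplotlib and numpy assumed; not part of the argument) of the region $\{xy\le z\}$ inside the unit square for one value of $z$:

```python
# Shade the event {Z <= z} = {xy <= z} inside the unit square, for z = 0.3.
import numpy as np
import matplotlib.pyplot as plt

z = 0.3
x = np.linspace(1e-3, 1, 400)           # avoid x = 0 when dividing
boundary = np.minimum(z / x, 1.0)       # hyperbola y = z/x, capped at the top edge y = 1

plt.fill_between(x, 0, boundary, alpha=0.3, label=r"$xy \le z$")
plt.plot(x, boundary, "k")              # the rectangular hyperbola xy = z
plt.xlim(0, 1); plt.ylim(0, 1)
plt.gca().set_aspect("equal")
plt.legend(); plt.show()
```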
So we need to integrate our joint density function over this region. There is some messiness in evaluating this integral: we need to break the integral up at $x=z$, since for $x\le z$ the hyperbola lies above the square and $y$ ranges over all of $[0,1]$, while for $x\gt z$ it only ranges up to $z/x$. We get
$$F_Z(z)= \Pr(Z\le z)=\int_{x=0}^z \left(\int_{y=0}^1 (2-2x)\,dy\right)\,dx + \int_{x=z}^1 \left(\int_{y=0}^{z/x} (2-2x)\,dy\right)\,dx. $$ Not difficult after that: the first piece evaluates to $2z-z^2$ and the second to $2z^2-2z-2z\ln z$, so $F_Z(z)=z^2-2z\ln z$.
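As a sanity check (not part of the original argument), a short sympy sketch mirrors the two-piece integral:

```python
# Symbolically reproduce the two-piece CDF integral; sympy is assumed available.
import sympy as sp

x, y, z = sp.symbols("x y z", positive=True)
piece1 = sp.integrate(sp.integrate(2 - 2*x, (y, 0, 1)), (x, 0, z))    # x in [0, z]
piece2 = sp.integrate(sp.integrate(2 - 2*x, (y, 0, z/x)), (x, z, 1))  # x in [z, 1]
print(sp.simplify(piece1 + piece2))  # z**2 - 2*z*log(z), possibly factored
```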
Differentiate for the density: $f_Z(z)=2z-2\ln z-2=2(z-1-\ln z)$ for $0\lt z\lt 1$, and $0$ elsewhere.
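For readers who like to double-check numerically, here is a small Monte Carlo sketch of this CDF (numpy assumed; the sampler is an aside, not part of the argument). Since the joint density factors as $2(1-x)\cdot 1$, $X$ and $Y$ are independent, and $X=1-\sqrt{U}$ inverts the marginal CDF $F_X(x)=1-(1-x)^2$.

```python
# Compare the empirical CDF of Z = XY against z^2 - 2 z ln z at a few points.
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
X = 1 - np.sqrt(rng.random(n))  # marginal density 2(1 - x) on [0, 1]
Y = rng.random(n)               # Uniform(0, 1), independent of X
Z = X * Y

for z in (0.1, 0.3, 0.5, 0.9):
    print(z, np.mean(Z <= z), z**2 - 2 * z * np.log(z))
```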
Solution 2:
An alternative to the route PDF $\to$ CDF $\to$ PDF (rather unnatural but often advocated on this site, for reasons which escape me) is to go directly PDF $\to$ PDF, thanks to our first Grand Universal Principle:
GUP n°1 Let the change of variables theorem do the job and be clever for you.
The method is so automated that one can nearly apply it while sleeping. To wit, your hypothesis is that, for every bounded measurable function $\varphi$,
$$ \mathbb E(\varphi(X,Y))=\iint\varphi(x,y)p_{X,Y}(x,y)\mathrm dx\mathrm dy. $$
The conclusion you want to reach is that there exists a function $q$, to be determined, such that, for every bounded measurable function $\psi$,
$$ \mathbb E(\psi(XY))=\int\psi(z)q(z)\mathrm dz. $$
The second identity is a special case of the first one, hence one asks that, for every bounded measurable function $\psi$,
$$ \int\psi(z)q(z)\mathrm dz=\iint\psi(xy)p_{X,Y}(x,y)\mathrm dx\mathrm dy. $$
The next step is obvious: use the change of variables $(x,y)\to(z,t)$ where $z=xy$ and $t$ is nearly anything else, for example $t=x$. Then $\mathrm dz\mathrm dt=x\mathrm dx\mathrm dy$, that is, $\mathrm dx\mathrm dy=\mathrm dz\mathrm dt/t$, hence
$$ \iint\psi(xy)p_{X,Y}(x,y)\mathrm dx\mathrm dy=\iint\psi(z)p_{X,Y}(t,z/t)\mathrm dz\mathrm dt/t. $$
By identification, the two formulas coincide if and only if
$$ q(z)=\int p_{X,Y}(t,z/t)\mathrm dt/t. $$
This formula is entirely general. The function $\psi$ was a dummy variable, meant to disappear.
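As an illustration (scipy assumed; `psi` is an arbitrary test function of my choosing, nothing canonical), one can check the identity $\mathbb E(\psi(XY))=\int\psi(z)q(z)\,\mathrm dz$ numerically for the density of this problem. The $t$-range $(z,1)$ used for $q$ anticipates the indicator bookkeeping of GUP n°2 below.

```python
# Check E[psi(XY)] computed two ways: a double integral against p_{X,Y},
# and a single integral against q(z) = int p(t, z/t) dt / t.
from scipy.integrate import quad, dblquad

def p(x, y):
    # Joint density 2(1 - x) on the unit square, 0 elsewhere.
    return 2 * (1 - x) if 0 <= x <= 1 and 0 <= y <= 1 else 0.0

def psi(z):
    return z ** 2  # any bounded test function works here

# Left-hand side: iint psi(xy) p(x, y) dx dy over the unit square.
lhs, _ = dblquad(lambda y, x: psi(x * y) * p(x, y), 0, 1, 0, 1)

# Right-hand side: build q from the change-of-variables formula, then integrate.
def q(z):
    return quad(lambda t: p(t, z / t) / t, z, 1)[0]

rhs, _ = quad(lambda z: psi(z) * q(z), 0, 1)

print(lhs, rhs)  # both ~ 1/18, i.e. E[(XY)^2]
```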
In the case at hand, the only thing that can still go awry is writing down the density $p_{X,Y}$ incorrectly. Here, the advice is summarized as our second Grand Universal Principle:
GUP n°2 Use indicator functions to include the conditions on the arguments in a single expression of the PDF.
To wit,
$$ p_{X,Y}(x,y)=2(1-x)\,\mathbf 1_{0\leqslant x,y\leqslant 1}, $$
hence
$$ q(z)=\int 2(1-t)\,\mathbf 1_{0\leqslant t\leqslant1}\,\mathbf 1_{0\leqslant z/t\leqslant 1}\mathrm dt/t=2\,\mathbf 1_{0\leqslant z\leqslant1}\int_z^1(1-t)\mathrm dt/t, $$
since, when $0\leqslant z\leqslant 1$, the two indicators together force $z\leqslant t\leqslant 1$. Thus, the density of $XY$ is the function $q$ defined by
$$ q(z)=2(z-1-\log z)\,\mathbf 1_{0\leqslant z\leqslant1}. $$
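A short sympy sketch (sympy assumed; merely a cross-check of the computation above) reproduces the last integral and confirms that $q$ is indeed a probability density:

```python
# Evaluate 2 * int_z^1 (1 - t)/t dt symbolically, and check normalization.
import sympy as sp

t, z = sp.symbols("t z", positive=True)
q = sp.simplify(2 * sp.integrate((1 - t) / t, (t, z, 1)))
print(q)                          # 2*z - 2*log(z) - 2, i.e. 2*(z - 1 - log(z))
print(sp.integrate(q, (z, 0, 1)))  # 1, so q integrates to one on [0, 1]
```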