Let $X=(X_1, X_2)$ be a two-dimensional continuous random vector with the following density:
$$f(x_1,x_2) = \begin{cases}\frac{3}{2}x_1^2x_2 + \frac{1}{2}(x_2 - \frac{1}{2}) - \frac{3}{4} x_1^2 & \text{ if } x_1 \in [0,1] \text{ and } x_2 \in [1,2] \\ 0 & \text{ else }\end{cases}$$
How can one calculate the expected value and variance for $X_1$ and $X_2$?
As far as I understand, the expected value is
$$E(X) = (E(X_1),\dots,E(X_p))^T$$
Usually one writes $\mu = E(X)$, and the components of $\mu^T = (\mu_1,\dots,\mu_p)$ correspond to the univariate expected values $\mu_i = E(X_i)$.
Analogously, the expected value of a random matrix $X = (X_{ij})$ should be the matrix of componentwise expected values, i.e.
$$E(X) = (E(X_{ij}))$$
The issue is that I don't know how exactly to use the density function $\frac{3}{2}x_1^2x_2 + \frac{1}{2}(x_2 - \frac{1}{2}) - \frac{3}{4} x_1^2$ to calculate these expected values.
Any help is appreciated.
You can get the marginal density of $X_1$ by marginalizing out $X_2$: $f_{X_1}(x_1)=\int_1^2 f(x_1, x_2)\,dx_2=\int_1^2 [\text{insert your formula here}]\,dx_2$. Then $E(X_1)=\int_0^1 x_1\, f_{X_1}(x_1)\,dx_1$. The same can be done for $X_2$ by marginalizing out $X_1$.
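To make the marginalization step concrete, here is a sketch using sympy; the density and integration limits are taken from the question, while the variable names are my own:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")

# Joint density from the question, valid on [0,1] x [1,2].
f = (sp.Rational(3, 2)*x1**2*x2
     + sp.Rational(1, 2)*(x2 - sp.Rational(1, 2))
     - sp.Rational(3, 4)*x1**2)

# Marginal densities: integrate out the other variable.
f1 = sp.expand(sp.integrate(f, (x2, 1, 2)))  # -> 3*x1**2/2 + 1/2 on [0, 1]
f2 = sp.expand(sp.integrate(f, (x1, 0, 1)))  # -> x2 - 1/2 on [1, 2]

# Expected values: E(X_i) = integral of x_i * f_{X_i}(x_i).
E1 = sp.integrate(x1*f1, (x1, 0, 1))  # -> 5/8
E2 = sp.integrate(x2*f2, (x2, 1, 2))  # -> 19/12
print(f1, f2, E1, E2)
```

As a quick sanity check, each marginal integrates to $1$ over its interval, as a density must.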
Since you now have the marginal pdfs of $X_1$ and $X_2$, you can compute the variances by standard techniques, e.g. $Var(X_i)=E(X_i^2)-E(X_i)^2$.
The covariance is given by $Cov(X_1, X_2)=E[(X_1-E(X_1))(X_2-E(X_2))]$ or equivalently $E(X_1X_2)-E(X_1)E(X_2)$.
Then you can summarize your results:
$$E(X)=(E(X_1), E(X_2))^T\\ Var(X)=\begin{pmatrix}Var(X_1)&Cov(X_1, X_2)\\ Cov(X_1, X_2)&Var(X_2)\end{pmatrix}$$