We draw six numbers ($x_1,x_2,x_3,x_4,x_5,x_6$) independently from the same distribution, with expected value $m$ and variance $\sigma^2$.
Next, we create a matrix
$\begin{bmatrix} x_1&x_2&x_3\\x_2&x_4&x_5\\x_3&x_5&x_6\end{bmatrix}$
I have to calculate the expected value of its determinant as a function of $m$ and $\sigma^2$, but I have no idea how to start.
Begin by expanding the determinant using the rule of Sarrus (https://en.wikipedia.org/wiki/Rule_of_Sarrus):
$$D=\det \begin{bmatrix} x_1&x_2&x_3\\x_2&x_4&x_5\\x_3&x_5&x_6\end{bmatrix}$$
$$=x_1x_4x_6+2x_2x_3x_5-x_4x_3^2-x_1x_5^2-x_6x_2^2\tag{1}$$
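As a sanity check (an illustration, not part of the proof), one can verify numerically that expansion (1) agrees with the determinant of the symmetric matrix for arbitrary entries:

```python
import numpy as np

# Draw six arbitrary numbers and build the symmetric matrix of the question.
rng = np.random.default_rng(0)
x1, x2, x3, x4, x5, x6 = rng.random(6)

M = np.array([[x1, x2, x3],
              [x2, x4, x5],
              [x3, x5, x6]])

# Expansion (1): x1*x4*x6 + 2*x2*x3*x5 - x4*x3^2 - x1*x5^2 - x6*x2^2
expanded = x1*x4*x6 + 2*x2*x3*x5 - x4*x3**2 - x1*x5**2 - x6*x2**2

assert np.isclose(np.linalg.det(M), expanded)
```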
Due to independence,
$$\text{if} \ p \neq q : \ \ \ E(x_px_q)=E(x_p)E(x_q)\tag{2}$$
Because operator $E()$ is linear, we can say that the expected value of (1) is :
$$E(D)=E(x_1)E(x_4)E(x_6)+2E(x_2)E(x_3)E(x_5)-E(x_4)E(x_3^2)-E(x_1)E(x_5^2)-E(x_6)E(x_2^2)$$
$$=m^3+2m^3-mE(x_3^2)-mE(x_5^2)-mE(x_2^2).\tag{3}$$
Now you must remember that (2) is not true if $p=q$. Instead we must use the fact that
$$E(x_p^2)=E(x_p)^2+\sigma^2=m^2+\sigma^2\tag{4},$$
and it remains to substitute (4) into (3) to obtain:
$$E(D)=3m^3-3m(m^2+\sigma^2)=-3m\sigma^2.$$
I have simulated the case where all the $x_p$ are drawn from a uniform distribution $U[0,1]$, with mean $m=\frac12$ and variance $\sigma^2=\tfrac{1}{12}$ (https://en.wikipedia.org/wiki/Uniform_distribution_(continuous)). The experimental mean one obtains is very close to $-\tfrac{1}{8}$, with a funny witch's-hat histogram giving a good idea of the underlying pdf; its left tail is a little heavier than its right counterpart, which explains the (slightly) negative mean:
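A minimal sketch of such a simulation (the sample size and seed are arbitrary choices of mine) uses expansion (1) directly on vectorized draws:

```python
import numpy as np

# Monte Carlo estimate of E(D) for entries drawn from U[0,1].
# Theory predicts E(D) = -3*m*sigma^2 = -3*(1/2)*(1/12) = -1/8.
rng = np.random.default_rng(42)
n = 200_000
x = rng.random((n, 6))  # columns are x1..x6

# Expansion (1) of the determinant, applied row-wise.
dets = (x[:, 0]*x[:, 3]*x[:, 5] + 2*x[:, 1]*x[:, 2]*x[:, 4]
        - x[:, 3]*x[:, 2]**2 - x[:, 0]*x[:, 4]**2 - x[:, 5]*x[:, 1]**2)

print(dets.mean())  # close to -0.125
```

A histogram of `dets` (e.g. with `matplotlib.pyplot.hist`) reproduces the "witch's hat" shape described above.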
It is not uninteresting to see that the extreme values of $D$ in this case appear to be $-2$ and $2$ (I have no proof), attained by the following matrices:
$$D=\det \begin{bmatrix} 1&1&0\\1&0&1\\0&1&1\end{bmatrix}=-2 \ \ \text{and} \ \ D=\det \begin{bmatrix}0&1&1\\1&0&1\\1&1&0\end{bmatrix}=2.$$
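These two extreme cases are quick to check numerically:

```python
import numpy as np

# The two 0/1 matrices quoted above, attaining D = -2 and D = 2.
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])
B = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

print(round(np.linalg.det(A)), round(np.linalg.det(B)))  # -2 2
```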