I've tried to prove that, given $N$ random variables $$x_1,\ldots,x_N$$
the expectation of their transformation $F$, linearized to first order around the input variables' mean values,
$$ y = F(x_1,…,x_N) $$
equals
$$ η_y = F(η_{x_1},…,η_{x_N}) $$
My doubt is: do the random variables need to be independent? If so, can you give an intuitive explanation of why the relationship above fails for dependent random variables?
Here is my proof. I suspect the red contribution can be neglected only if the joint probability density function factorizes into the product of the marginal densities. Do you agree? I don't have an intuitive explanation for that.

If $Y$ is defined by linearizing $F(X_1,\ldots,X_N)$ around the mean values $(\eta_{X_1},\ldots,\eta_{X_N})$, then we are saying $$ Y := F(\eta_{X_1},\ldots,\eta_{X_N})+\sum_{i=1}^N\left[\frac{\partial F}{\partial x_i}(\eta_{X_1},\ldots,\eta_{X_N})\right](X_i-\eta_{X_i}).\tag1$$ To compute the expectation of $Y$, the key is to observe that $F(\eta_{X_1},\ldots,\eta_{X_N})$ is constant, and so are the quantities $\frac{\partial F}{\partial x_i}(\eta_{X_1},\ldots,\eta_{X_N})$ for each $i=1,\ldots,N$. Therefore the expectation of the expression (1) is $$E(Y)=F(\eta_{X_1},\ldots,\eta_{X_N})+\sum_{i=1}^N\left[\frac{\partial F}{\partial x_i}(\eta_{X_1},\ldots,\eta_{X_N})\right]E(X_i-\eta_{X_i}),\tag2$$ by linearity of the expectation operator, regardless of the joint distribution of $(X_1,\ldots,X_N)$. Finally the quantity $E(X_i-\eta_{X_i})$ equals zero by definition of $\eta_{X_i}$.
There's no need to involve the joint densities when computing the expectation of a sum. Linearity of expectation is valid for any collection of integrable random variables. You can see a proof that $E(X+Y)=E(X)+ E(Y)$ when $(X,Y)$ have joint density here; notice that independence is not required.
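To make this concrete, here is a minimal numerical sketch (with the hypothetical choice $F(x_1,x_2)=x_1 x_2$ and deliberately dependent inputs, using sample means in place of $\eta_{X_i}$): the linearized $Y$ from (1) has mean $F(\eta_{X_1},\eta_{X_2})$ even under strong dependence, while the exact $F(X_1,X_2)$ does not, since $E(X_1X_2)=\eta_{X_1}\eta_{X_2}+\mathrm{Cov}(X_1,X_2)$ — the dependence shows up only in the higher-order terms the linearization discards.

```python
import numpy as np

# Hypothetical example: F(x1, x2) = x1 * x2, with strongly dependent inputs.
rng = np.random.default_rng(0)
n = 1_000_000
x1 = rng.normal(2.0, 1.0, size=n)
x2 = x1 + rng.normal(0.0, 0.5, size=n)   # x2 depends on x1 by construction

m1, m2 = x1.mean(), x2.mean()            # sample means stand in for eta

# Linearization (1): dF/dx1 = x2 and dF/dx2 = x1, evaluated at the means.
y_lin = m1 * m2 + m2 * (x1 - m1) + m1 * (x2 - m2)

print(y_lin.mean() - m1 * m2)        # ~0: E[Y] = F(eta) despite dependence
print((x1 * x2).mean() - m1 * m2)    # ~Cov(x1, x2): the exact F is shifted
```

The first difference vanishes (up to floating-point error) exactly as equation (2) predicts; the second is close to $\mathrm{Cov}(X_1,X_2)$, which here equals $\mathrm{Var}(X_1)\approx 1$.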