I understand that the weighted average ($\bar{x}$) of a set of uncorrelated random variables $\{x_i\}$, each estimating the same underlying quantity but with its own variance, is given by
$$\bar{x}=\frac{\sum_i w_ix_i}{\sum_i w_i} \tag{1}$$
where the weight $w_i$ of the variable $x_i$ is given by the inverse of that variable's variance $\sigma_i^2$:
$$w_i=\frac{1}{\sigma_i^2} \tag{2}$$
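To make eqns. (1) and (2) concrete, here is a minimal numerical sketch (the measurement values and standard deviations are made up for illustration):

```python
import numpy as np

# Hypothetical data: three measurements of the same quantity,
# each with its own known standard deviation sigma_i.
x = np.array([10.2, 9.8, 10.5])    # measured values x_i
sigma = np.array([0.5, 0.3, 0.8])  # standard deviations sigma_i

w = 1.0 / sigma**2                 # eqn (2): inverse-variance weights
xbar = np.sum(w * x) / np.sum(w)   # eqn (1): weighted average
```

Note that the most precise measurement (smallest $\sigma_i$) dominates the average, which is the whole point of inverse-variance weighting.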
But what I can't understand is why, when deriving the variance of this weighted average, everybody seems to overlook the fact that the weights are themselves functions of the variables $\{x_i\}$. I'll show what I mean by going through the typical derivation.
We know that the error in a function $f(\{x_i\})$ of uncorrelated random variables is given by the standard error-propagation formula:
$$\left(\Delta f\right)^2=\sum_i\left(\frac{\partial f}{\partial x_i}\right)^2(\Delta x_i)^2 \tag{3}$$
Letting $f=\bar{x}$, eqn. (3) becomes:
$$\begin{align} \sigma^2&=(\Delta \bar{x})^2\\ &=\sum_i\left(\frac{\partial \bar{x}}{\partial x_i}\right)^2\sigma_i^2 ~~~~~~~~~~~~~~~~\leftarrow\textrm{(this line)} \tag{4}\\ &=\sum_i\left(\frac{w_i}{\sum_j w_j}\right)^2\frac{1}{w_i}\\ &=\frac{1}{(\sum_jw_j)^2}\sum_iw_i\\ &=\left(\sum_iw_i\right)^{-1}\\ \end{align}$$
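For what it's worth, the textbook result $\sigma^2=(\sum_i w_i)^{-1}$ does check out numerically when the $\sigma_i$ (and hence the $w_i$) are treated as fixed, known constants. A quick Monte Carlo sketch of that assumption (the true value 10 and the $\sigma_i$ are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: many repeated trials of three uncorrelated
# measurements of the same true value, with fixed known sigmas.
sigma = np.array([0.5, 0.3, 0.8])
w = 1.0 / sigma**2                 # weights held constant across trials
n_trials = 200_000

# Draw all trials at once; each row is one set of measurements x_i.
x = rng.normal(10.0, sigma, size=(n_trials, sigma.size))
xbar = (x * w).sum(axis=1) / w.sum()   # eqn (1) applied per trial

empirical_var = xbar.var()             # scatter of the weighted averages
predicted_var = 1.0 / w.sum()          # the standard result above
```

With fixed weights the empirical variance of $\bar{x}$ agrees with $(\sum_i w_i)^{-1}$, so my question is really about whether treating the $w_i$ as constants is justified.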
I do not understand step (4) above (marked with the arrow). Since $\bar{x}$ depends on the weights $w_i$ as well, why isn't the correct equation instead the following, eqn. (5)?
$$\sigma^2=\sum_i\left[\left(\frac{\partial \bar{x}}{\partial x_i}\right)^2\sigma_i^2 + \left(\frac{\partial \bar{x}}{\partial w_i}\right)^2(\Delta w_i)^2\right] \tag{5}$$