I have an iterative matrix equation of the form: $$\vec{f}^{(s+1)} = \mathbf{M}^{(s)} \vec{d}$$ where $\mathbf{M}^{(s)}$ is a function of $\vec{f}^{(s)}$ and a response matrix $\mathbf{R}$, and $\vec{d}$ is my raw data vector. A bit of research on Wikipedia suggests that if the only quantity with uncertainties were $\vec{d}$, I could simply calculate the covariance matrix of $\vec{f}^{(s+1)}$ using the following equation: $$\mathbf{V}_{f^{(s+1)}} = \mathbf{M}^{(s)} \mathbf{V}_d \, {\mathbf{M}^{(s)}}^T$$
where $\mathbf{V}_d$ is the covariance matrix of $\vec{d}$.
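For concreteness, the sandwich formula above is a one-liner in NumPy. This is only an illustrative sketch with made-up small dimensions and random values, not my actual (1500x3200) problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small example: M is (3 x 5), d has 5 entries.
M = rng.normal(size=(3, 5))
d = rng.normal(size=5)
V_d = np.diag(rng.uniform(0.1, 1.0, size=5))  # covariance of d (here diagonal)

f = M @ d                 # f = M d
V_f = M @ V_d @ M.T       # sandwich formula: V_f = M V_d M^T
# V_f is symmetric positive semidefinite by construction.
```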
However, I also have uncertainties in the response matrix, $\mathbf{R}$, that is used to calculate $\mathbf{M}^{(s)}$. Thus my question:
How do I propagate my uncertainties through a linear formula where both the matrix and vector have uncertainties? Are there methods other than brute force Taylor expansion to do this?
Edit: I have seen a method to propagate the errors, however it relies on the invertibility of $\mathbf{M}^{(s)}$ and merely gives magnitudes of errors across the whole matrix. Sadly, my matrix is not square ($\mathbf{M}^{(s)}$ is $1500\times3200$), and I need the actual covariance matrix, not a general percentage error applying to the whole matrix.
Assuming that you can find the covariance for the elements of the matrices $M^{(s)}$ from $R$, I think a little progress can be made. We have $$ f_i = \sum_j M_{ij} d_j $$ with expectation value $$ \mathrm{E}[f_i] = \sum_j \mathrm{E}[M_{ij} d_j] = \sum_j \mathrm{E}[M_{ij}] \mathrm{E}[d_j] + \sum_j \mathrm{Cov}[M_{ij}, d_j] $$ and covariance matrix $$ \mathrm{Cov}[f_i, f_j] = \sum_{k,l} \mathrm{Cov}[M_{ik} d_k, M_{jl} d_l] \;. $$

I do not know of a more explicit expression for the covariance of the products of random variables than that given in this article, which does not seem to make life much easier here. However, it mentions an approximation for the covariance, $$ \begin{multline} \mathrm{Cov}[f_i, f_j] \approx \sum_{k,l} \big\{ \mathrm{E}[M_{ik}] \, \mathrm{E}[M_{jl}] \, \mathrm{Cov}[d_k, d_l] + \mathrm{E}[M_{ik}] \, \mathrm{E}[d_l] \, \mathrm{Cov}[d_k, M_{jl}]\\ + \mathrm{E}[d_k] \, \mathrm{E}[M_{jl}] \, \mathrm{Cov}[M_{ik}, d_l] + \mathrm{E}[d_k] \, \mathrm{E}[d_l] \, \mathrm{Cov}[M_{ik}, M_{jl}] \big\} \;. \end{multline} $$

Due to the iterative nature, there is going to be correlation between $M^{(s)}$ and $\vec d$, which depends on the function that computes the matrix. Now, if you can reasonably ignore that covariance, then you are left with $$ \mathrm{Cov}[f_i, f_j] \approx \sum_{k,l} \big\{ \mathrm{E}[M_{ik}] \, \mathrm{E}[M_{jl}] \, \mathrm{Cov}[d_k, d_l] + \mathrm{E}[d_k] \, \mathrm{E}[d_l] \, \mathrm{Cov}[M_{ik}, M_{jl}] \big\} \;, $$ which looks almost manageable.
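To show that the last approximation is computable in practice, here is a NumPy sketch. The dimensions, the element covariances of $M$ (taken as independent elements with a common variance), and $\mathrm{Cov}[d]$ are all made-up stand-ins; with a real four-index tensor $\mathrm{Cov}[M_{ik}, M_{jl}]$ derived from $R$, the same `einsum` contraction applies:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 4  # small hypothetical dimensions

# Means and covariances (illustrative values only)
EM = rng.normal(size=(n, m))   # E[M], elementwise expectations
Ed = rng.normal(size=m)        # E[d]
V_d = 0.1 * np.eye(m)          # Cov[d_k, d_l]

# Four-index covariance tensor Cov[M_ik, M_jl]; here a stand-in built
# from independent matrix elements, each with variance 0.05.
cov_M = np.zeros((n, m, n, m))
for i in range(n):
    for k in range(m):
        cov_M[i, k, i, k] = 0.05

# Approximation with the M-d cross terms dropped:
# Cov[f]_ij ~= (E[M] V_d E[M]^T)_ij + sum_{k,l} E[d_k] E[d_l] Cov[M_ik, M_jl]
term1 = EM @ V_d @ EM.T
term2 = np.einsum('k,l,ikjl->ij', Ed, Ed, cov_M)
V_f = term1 + term2
```

For the full $1500\times3200$ case the dense four-index tensor is infeasible to store, so one would exploit whatever sparsity or structure $\mathrm{Cov}[M_{ik}, M_{jl}]$ inherits from $R$ (e.g. if elements are independent, only the `[i,k,i,k]` entries survive and `term2` reduces to a diagonal correction).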