I am implementing a maximum likelihood method (the EM algorithm) for which I'm using Broyden's method at each iteration. Here is the formula:
$\Delta A = \frac{(\Delta \theta - A \Delta\tilde{g})\Delta\theta^{T}A}{\Delta\theta^{T}A\Delta\tilde{g}} $
$A$ is either a scalar or a matrix, so the denominator is a scalar unless $\Delta \theta$ and $\Delta\tilde{g}$ are matrices; in that case, I don't know what to do. Does anyone have experience with this form of Broyden's method? Do you take the norm of the denominator to avoid dividing by a matrix?
From what I understand, $\Delta\theta$ and $\Delta\tilde g$ are column vectors with $n$ entries (so they are $n\times 1$ matrices) and $A$ is an $n\times n$ matrix. Therefore, $\Delta\theta^T A$ is the product of a $1\times n$ matrix with an $n\times n$ matrix (notice the transpose on the $\Delta\theta$), so the result is a $1\times n$ matrix, also known as a row vector. From this, we see the denominator of your expression for $\Delta A$ is of the form $$(\Delta\theta^T A)\Delta \tilde g \sim (1\times n)*(n\times 1)$$ where $\sim$ is used to indicate the dimensions of the matrices. The result is therefore $1\times 1$: a scalar.
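To make the dimension bookkeeping concrete, here is a small NumPy sketch. The variable names (`dtheta`, `dg`) and the random data are purely illustrative; the point is that the denominator comes out as a $1\times 1$ array (a scalar) and that $\Delta A$ has the same $n\times n$ shape as $A$, as it must for the update $A \leftarrow A + \Delta A$:

```python
import numpy as np

# Illustrative dimensions: n parameters.
n = 4
rng = np.random.default_rng(0)

A = rng.standard_normal((n, n))       # n x n matrix being updated
dtheta = rng.standard_normal((n, 1))  # n x 1 column vector (change in parameters)
dg = rng.standard_normal((n, 1))      # n x 1 column vector (change in g)

# (1 x n)(n x n)(n x 1) -> 1 x 1: a scalar, so ordinary division is fine.
denom = dtheta.T @ A @ dg
assert denom.shape == (1, 1)

# Numerator: (n x 1) column times (1 x n) row -> n x n outer product.
dA = (dtheta - A @ dg) @ (dtheta.T @ A) / denom
assert dA.shape == (n, n)
```

As a sanity check, this update satisfies the secant-type condition $(A + \Delta A)\,\Delta\tilde g = \Delta\theta$, which you can verify numerically with `np.allclose((A + dA) @ dg, dtheta)`.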