Any help would be super appreciated on this.
Consider a set of functions $\mathcal{F}$ that is some subset of $L_2$ and two bounded linear operators $A$ and $B$. The inverse $B^{-1}$ is well defined on $\mathcal{F}$ and its norm is bounded by a constant $c$. A generalized inverse of the composition, $(BA)^{-1}$, is defined on a dense subset of $\mathcal{F}$ (generalized in the sense that the composition need not be surjective; it only has dense range in $\mathcal{F}$). Specifically, it is well defined on the dense subset on which $A^{-1}$ is well defined ($A$ also has dense range).

While the inverse of the composition is not well defined on the whole space, we can consider some regularized (e.g. Tikhonov) inverse with regularization parameter $\epsilon$, denoted $(BA)^{\epsilon}$. Now, on the dense subset on which $(BA)^{-1}$ is well defined, it is clear that the linear operator $A(BA)^{\epsilon}-B^{-1}$ converges to zero pointwise as $\epsilon$ goes to zero, i.e. $||(A(BA)^{\epsilon}-B^{-1})g|| \to 0$ for each $g$ in that subset.

My question is: does this then have to be true on the whole space? And if not, is there some way of characterizing the subset of $\mathcal{F}$ for which it is true (it will sometimes hold even outside the set on which $(BA)^{-1}$ is well defined)?
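For concreteness, here is a rough finite-dimensional sketch of the quantities involved, taking the Tikhonov-regularized inverse $(BA)^{\epsilon} = ((BA)^*BA + \epsilon I)^{-1}(BA)^*$. In finite dimensions the dense-subset subtlety disappears (an injective composition is automatically invertible), so this only illustrates the objects in the question, not the question itself; the particular matrices and names below are toy choices of mine.

```python
import numpy as np

# Finite-dimensional toy analogue (illustration only; all names are my own choices).
# A: ill-conditioned, standing in for an operator with dense but non-closed range.
# B: a perturbation of the identity with ||I - B|| < 1, so B^{-1} is bounded.
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(1.0 / np.arange(1, n + 1) ** 2) @ V.T            # rapidly decaying singular values
B = np.eye(n) + 0.3 * (U @ np.diag(rng.uniform(-1, 1, n)) @ U.T)  # ||I - B|| <= 0.3

T = B @ A
B_inv = np.linalg.inv(B)

def tikhonov_inverse(T, eps):
    """Tikhonov-regularized inverse (T^* T + eps I)^{-1} T^*."""
    return np.linalg.solve(T.T @ T + eps * np.eye(T.shape[1]), T.T)

# g lies in the range of BA, so (BA)^{-1} g is defined in exact arithmetic.
g = T @ rng.standard_normal(n)
for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
    err = np.linalg.norm(A @ tikhonov_inverse(T, eps) @ g - B_inv @ g)
    print(f"eps={eps:.0e}  ||A (BA)^eps g - B^-1 g|| = {err:.3e}")
```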
Thanks!
I think I have figured this out under the assumption that $||I-B||<1$. My reasoning is as follows:
By the assumption of dense range and the defining property of a regularized inverse I know that, letting $\gamma_\epsilon=(BA)^\epsilon Bf$ for $f \in \mathcal{F}$, we have $||BA \gamma_\epsilon - Bf||\to 0$ as $\epsilon \to 0$. Now write $A\gamma_\epsilon - f = (BA\gamma_\epsilon - Bf) + (I-B)(A\gamma_\epsilon - f)$, so by the triangle inequality $||A \gamma_\epsilon -f||\leq ||BA \gamma_\epsilon - Bf|| + ||I-B||\,||A \gamma_\epsilon -f||$, and hence $(1-||I-B||)||A \gamma_\epsilon -f||\leq ||BA \gamma_\epsilon - Bf||$. So we see that $||I-B||<1$ is sufficient for $||A \gamma_\epsilon -f||\to 0$, i.e. for $A(BA)^\epsilon Bf \to f = B^{-1}Bf$.
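As a sanity check, the inequality above can be verified numerically in a finite-dimensional toy setting (again with the Tikhonov choice of $(BA)^{\epsilon}$ and matrices of my own construction, so this only illustrates the bound, it is not a proof):

```python
import numpy as np

# Check of (1 - ||I-B||) ||A gamma_eps - f|| <= ||BA gamma_eps - B f||
# in a finite-dimensional toy setting (illustration only, not a proof).
rng = np.random.default_rng(1)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(1.0 / np.arange(1, n + 1) ** 2) @ U.T             # ill-conditioned
B = np.eye(n) + 0.4 * (U @ np.diag(rng.uniform(-1, 1, n)) @ U.T)  # ||I - B|| <= 0.4 < 1

T = B @ A
contraction = np.linalg.norm(np.eye(n) - B, 2)  # spectral norm of I - B
f = rng.standard_normal(n)

for eps in [1e-2, 1e-4, 1e-6]:
    # gamma_eps = (BA)^eps B f with the Tikhonov-regularized inverse.
    gamma = np.linalg.solve(T.T @ T + eps * np.eye(n), T.T @ (B @ f))
    lhs = (1 - contraction) * np.linalg.norm(A @ gamma - f)
    rhs = np.linalg.norm(T @ gamma - B @ f)
    print(f"eps={eps:.0e}  lhs={lhs:.3e}  rhs={rhs:.3e}  holds={lhs <= rhs}")
```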