Let $A$ be an $n \times n$ matrix with all entries nonnegative and all row sums strictly less than one, let $V$ be an $n \times n$ nonnegative diagonal matrix satisfying $V \leq I$ (entrywise), let $\iota$ denote the $n$-vector of ones, let $B\equiv\left(I-AV\right)^{-1}$ and $B^{*}\equiv\left(I-A\right)^{-1}$, let $X$ be a vector in the $n$-dimensional simplex (i.e., $x_j \geq 0$ and $\sum_{j=1}^n x_j=1$), let $D_1$ and $D_2$ be two strictly positive diagonal $n \times n$ matrices, and finally let $$\tilde{M}\equiv\left(\mathrm{diag}\left\{ B^{T}X\right\} \right)^{-1}B^{T}\left[V\mathrm{diag}\left\{ X\right\} +\left(I-V\right)\mathrm{diag}\left\{ B^{T}X\right\} \right]D_{1}BD_2\mathrm{diag}\left(\iota-A\iota\right).$$ I want to show that the spectral radius of $\tilde{M}$ is at most one, $\rho(\tilde{M})\leq 1$, provided that the following condition holds: $$\tag{1} D_{1} B^{*} D_{2} \left(I-A\right) \iota\leq\iota.$$
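As a sanity check on the conjecture, here is a quick numerical experiment (a Python/NumPy sketch, not part of the proof): it draws random instances satisfying the hypotheses, chooses $D_1$ so that condition (1) holds with equality, and verifies that $\rho(\tilde M)\leq1$.

```python
import numpy as np

# Numerical sanity check: random instances with row sums of A below one,
# V a diagonal matrix with entries in [0,1], D1 chosen so that (1) binds.
rng = np.random.default_rng(0)
n = 5
iota = np.ones(n)
max_rho = 0.0
for _ in range(200):
    A = rng.uniform(0.01, 1.0, (n, n))
    A *= rng.uniform(0.1, 0.95, (n, 1)) / A.sum(axis=1, keepdims=True)  # row sums < 1
    v = rng.uniform(0.0, 1.0, n)
    V = np.diag(v)
    B = np.linalg.inv(np.eye(n) - A @ V)        # B  = (I - AV)^{-1}
    Bstar = np.linalg.inv(np.eye(n) - A)        # B* = (I - A)^{-1}
    d2 = rng.uniform(0.2, 2.0, n)
    # choose D1 so that D1 B* D2 (I - A) iota = iota, i.e. equality in (1)
    d1 = 1.0 / (Bstar @ (d2 * (iota - A @ iota)))
    x = rng.dirichlet(np.ones(n))               # random point in the simplex
    DB = np.diag(B.T @ x)
    M = (np.linalg.inv(DB) @ B.T
         @ (V @ np.diag(x) + (np.eye(n) - V) @ DB)
         @ np.diag(d1) @ B @ np.diag(d2) @ np.diag(iota - A @ iota))
    max_rho = max(max_rho, abs(np.linalg.eigvals(M)).max())
print(max_rho)  # stays <= 1 (up to rounding) in all trials
```

Of course this only probes random instances; it rules out easy counterexamples but proves nothing.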
It is useful to note the connection between this question and two questions that have already been solved. First, in the special case with $D_2 = I$, the problem above reduces to showing that $\rho(M)\leq 1,$ where $$M\equiv\left(\mathrm{diag}\left\{ B^{T}X\right\} \right)^{-1}B^{T}\left[V\mathrm{diag}\left\{ X\right\} +\left(I-V\right)\mathrm{diag}\left\{ B^{T}X\right\} \right] B\mathrm{diag}\left(\iota-A\iota\right).$$ This is the question Bounding spectral radius of special matrix, which has already been solved; see https://math.stackexchange.com/a/4402778/165163. Second, in the special case in which $D_1 = \eta I$ and $D_2 = \gamma \mathrm{diag}(e_i)$, with $\eta,\gamma$ nonnegative constants and $e_i$ the vector with zeros everywhere except for a 1 in position $i$, we have (writing $s=A\iota$ for the vector of row sums of $A$) $$\rho(\tilde{M}) = \tilde{M}_{ii}=\frac{\sum_{l}\left(v_{l}x_{l}+\left(1-v_{l}\right)\sum_{k}b_{kl}x_{k}\right)b_{li}^{2}\eta\left(1-s_{i}\right)\gamma}{\sum_{r}b_{ri}x_{r}}.$$ Since both the numerator and the denominator are linear in $X$, it is enough to consider this expression at the corners $X=e_j$. Thus, we need to show that $$\tilde{M}_{ii}^{(j)}=v_{j}b_{ji}\eta\left(1-s_{i}\right)\gamma+\frac{\sum_{l}\left(1-v_{l}\right)b_{jl}b_{li}^{2}\eta\left(1-s_{i}\right)\gamma}{b_{ji}}\leq1.$$ Given our assumptions on $D_{1}$ and $D_{2}$, condition (1) collapses to $\eta b_{ji}^{*}\left(1-s_{i}\right)\gamma\leq1$ for all $j$. Since $b_{ii}^{*}\geq b_{ji}^{*}$ for all $j$, it suffices to show that $$v_{j}b_{ji}^{2}+\sum_{l}\left(1-v_{l}\right)b_{jl}b_{li}^{2}\leq b_{ji} b_{ii}^{*},$$ as formulated in the question Inequality involving matrix inverse elements, which has already been solved (see here). To some extent, the challenge now is to combine the ideas in the solutions to these two special cases to solve the more general problem posed here.
Finally, as in the case with $D_2 = I$ discussed in Bounding spectral radius of special matrix, two simple cases are illustrative. First, if $V = I$ then condition (1) implies that $\tilde{M}\iota \leq \iota$ and so $\rho(\tilde{M}) \leq 1$. Second, if $A$ is diagonal then $\tilde{M}$ is diagonal as well, so we would just need to show that each diagonal element is at most one. Each diagonal element of $\tilde{M}$ is of the form $$d_1d_2\left(v+\frac{1-v}{1-av}\right)\frac{1-a}{1-av},$$ while (1) implies that $d_1d_2\leq 1$, so it is enough to show that $$\left(v+\frac{1-v}{1-av}\right)\frac{1-a}{1-av}\leq 1.$$ Simple algebra shows this to be true.
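For completeness, here is that simple algebra (all denominators are positive since $0\leq a<1$ and $0\leq v\leq1$): $$v+\frac{1-v}{1-av}=\frac{v(1-av)+1-v}{1-av}=\frac{1-av^{2}}{1-av},$$ so $$\left(v+\frac{1-v}{1-av}\right)\frac{1-a}{1-av}\leq 1 \iff (1-av^{2})(1-a)\leq(1-av)^{2} \iff 0\leq a(1-v)^{2},$$ which clearly holds.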
Like Konstantin's answer, this solution is an extension of my solution to the linked question and also uses ideas from his answer.
As in my linked answer, the spectral radius of $\tilde M$ is at most $1$ if and only if the matrix $$K=B^T[VD_X+(I-V)D_B]D_1B-D_BD_2^{-1}D_A^{-1}$$ is negative semidefinite (in the sense that $w^TKw\leq0$ for all $w$; $K$ need not be symmetric), where $D_B=\newcommand{\diag}{\mbox{ diag}}\diag(B^TX)$, $D_A=\diag(\iota-s)$ with $s=A\iota$, and $D_X=\diag(X)$. Observe that, as in the linked answer, we must require that $B^TX$ has positive components for $\tilde M$ to be defined. In the statement about $K$ this is no longer necessary.
Now if $K$ is negative semidefinite then so is the matrix $\tilde K$ obtained by replacing $D_1$ by a matrix $\tilde D_1\leq D_1$ elementwise. Therefore we can assume from now on that we have equality in condition (1) of the question, that is, $$\tag{C}D_1B^*D_2D_A\iota=\iota.$$ Observe that this is possible because $D_2$ and $D_A$ have positive diagonal elements, hence $D_2D_A\iota$ has positive components, and therefore so does $B^*D_2D_A\iota$ (as $B^*\geq I$ entrywise).
Since $K$ is a linear function of $X$ (through $D_X$ and $D_B$) and $X$ is a nonnegative combination of the unit vectors, it suffices to show that $K$ is negative semidefinite for the unit vectors $X=e_j$, $j=1,\ldots,n$. Therefore we have to show that the matrices $$L_j=B^T[VE_j+(I-V)F_j]D_1B-F_jD_2^{-1}D_A^{-1}$$ are negative semidefinite, where $E_j=\diag(e_j)$ and $F_j=\diag(B^Te_j)=\diag(b_{j1},\ldots,b_{jn})$. Fixing an arbitrary vector $w$, we have to show that $z_j:=w^TL_jw\leq0$ for all $j$.
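Before continuing, this reduction can also be spot-checked numerically. The sketch below (Python/NumPy; $D_1$ is constructed so that (C) holds with equality, as justified above) checks that the symmetric part of each $L_j$ has no positive eigenvalues, which is equivalent to $w^TL_jw\leq0$ for all $w$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
iota = np.ones(n)
worst = -np.inf  # largest eigenvalue of any (L_j + L_j^T)/2 encountered
for _ in range(100):
    A = rng.uniform(0.01, 1.0, (n, n))
    A *= rng.uniform(0.1, 0.9, (n, 1)) / A.sum(axis=1, keepdims=True)
    v = rng.uniform(0.0, 1.0, n)
    V = np.diag(v)
    s = A @ iota
    B = np.linalg.inv(np.eye(n) - A @ V)
    Bstar = np.linalg.inv(np.eye(n) - A)
    d2 = rng.uniform(0.2, 2.0, n)
    d1 = 1.0 / (Bstar @ (d2 * (1 - s)))   # equality in (C)
    for j in range(n):
        Ej = np.diag((np.arange(n) == j).astype(float))
        Fj = np.diag(B[j])                # diag(B^T e_j) = diag(b_j1,...,b_jn)
        Lj = (B.T @ (V @ Ej + (np.eye(n) - V) @ Fj) @ np.diag(d1) @ B
              - Fj @ np.diag(1.0 / (d2 * (1 - s))))
        worst = max(worst, np.linalg.eigvalsh((Lj + Lj.T) / 2).max())
print(worst)  # nonpositive up to rounding
```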
As in the linked answer, it is sufficient for this to show that $$\tag I \sum_ja_{rj}v_jz_j\geq z_r\mbox{ for all }r,$$ because this implies that the vector $z$ with components $z_r$ can be expressed as $z=-\sum_{k\geq0} (AV)^k g$ (the series converges because $\rho(AV)<1$), where $g$ is the vector of the nonnegative differences $g_r=\sum_j a_{rj}v_jz_j - z_r$; since $AV$ and $g$ are nonnegative, this gives $z\leq0$.
For a proof of (I) (i.e. the nonnegativity of the components of $g$), we rewrite $z_j$ using the vector $u=Bw$. We have $$\begin{aligned}z_j=w^TL_jw&=u^TVE_jD_1 u+u^T(I-V)F_jD_1u-w^TF_jD_2^{-1}D_A^{-1}w\\ &=v_jd_{1j}u_j^2+\sum_m u_m^2(1-v_m)b_{jm}d_{1m}-\sum_m \frac{w_m^2b_{jm}}{d_{2m}(1-s_m)}\end{aligned}$$ where $d_{1k}$ are the diagonal elements of $D_1$ and $d_{2k}$ those of $D_2$. Recall that $s_r=(A\iota)_r=\sum_\ell a_{r\ell}\in [0,1[.$ Using that $B=AVB+I$ implies $\sum_j a_{rj}v_jb_{jn}=b_{rn}-\delta_{rn}$ where $\delta_{rn}=1$ if $r=n$ and $=0$ otherwise, a calculation as in the linked answer shows that $$g_r=\sum_j a_{rj}d_{1j}v_j^2u_j^2-d_{1r}u_r^2+\frac{w_r^2}{d_{2r}(1-s_r)}.$$
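This closed form for $g_r$ can be verified numerically as well (Python/NumPy sketch; note that condition (C) is not needed for this identity, so $D_1$ and $D_2$ are arbitrary positive diagonals here):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.uniform(0.01, 1.0, (n, n))
A *= 0.8 / A.sum(axis=1, keepdims=True)       # row sums = 0.8 < 1
v = rng.uniform(0.0, 1.0, n)
V = np.diag(v)
s = A.sum(axis=1)
d1 = rng.uniform(0.5, 2.0, n)
d2 = rng.uniform(0.5, 2.0, n)
B = np.linalg.inv(np.eye(n) - A @ V)          # B = (I - AV)^{-1}
w = rng.normal(size=n)
u = B @ w
z = np.empty(n)
for j in range(n):
    Ej = np.diag((np.arange(n) == j).astype(float))
    Fj = np.diag(B[j])                        # diag(b_j1, ..., b_jn)
    Lj = (B.T @ (V @ Ej + (np.eye(n) - V) @ Fj) @ np.diag(d1) @ B
          - Fj @ np.diag(1.0 / (d2 * (1 - s))))
    z[j] = w @ Lj @ w                         # z_j = w^T L_j w
g_direct = A @ (v * z) - z                    # g_r = sum_j a_rj v_j z_j - z_r
g_formula = A @ (d1 * v**2 * u**2) - d1 * u**2 + w**2 / (d2 * (1 - s))
print(np.allclose(g_direct, g_formula))       # True
```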
For a proof of the nonnegativity of all $g_r$, we assume that not all of the $a_{rj}$, $j=1,\ldots,n$, are zero. The exceptional case in which they all vanish is treated as in the linked answer and left to the reader.
We again use the Cauchy-Schwarz inequality, $\left(\sum_{j}x_{j}y_{j}\right)^{2}\leq\left(\sum_{j}x_{j}^{2}\right)\left(\sum_{j}y_{j}^{2}\right)$, with $x_{j}=\sqrt{a_{rj}d_{1j}^{-1}}$ and $y_{j}=\sqrt{a_{rj}d_{1j}v_{j}^{2}u_{j}^{2}}$ to show that $$\tag{II}\left(\sum_j a_{rj}v_ju_j\right)^2\leq \left(\sum_ja_{rj}d_{1j}^{-1}\right) \left(\sum_j a_{rj}d_{1j}v_j^2u_j^2\right)=:S_r\sum_j a_{rj}d_{1j}v_j^2u_j^2.$$ At this point, we need to express $S_r$ (which is positive) using condition (C). We have $B^*D_2D_A\iota=D_1^{-1}\iota$ and hence $D_2D_A\iota=(I-A)D_1^{-1}\iota$. Therefore $AD_{1}^{-1}\iota=D_1^{-1}\iota-D_2D_A\iota$. The $r$-th component of this vector equation is $$S_r=\sum_j a_{rj}d_{1j}^{-1}=d_{1r}^{-1}-d_{2r}(1-s_r).$$ Observe that the positivity of $S_r$ implies that $d_{1r}^{-1}>d_{2r}(1-s_r)$.
Recalling again from the linked answer that $$\sum_j a_{rj}v_ju_j=\sum_{j,n} a_{rj}v_jb_{jn}w_n=\sum_n (b_{rn}-\delta_{rn})w_n=u_r-w_r,$$ we obtain with (II) that $$g_r\geq \frac1 {S_r}(u_r-w_r)^2-d_{1r}u_r^2+\frac{w_r^2}{d_{2r}(1-s_r)}=\frac{d_{1r}}{1-p_r}(u_r-w_r)^2-d_{1r}u_r^2+\frac{d_{1r}w_r^2}{p_r},$$ where $p_r=d_{1r}d_{2r}(1-s_r)\in ]0,1[$. Rewriting this last expression, the proof is complete: $$g_r\geq \frac{d_{1r}}{p_r(1-p_r)}(p_ru_r-w_r)^2\geq0.$$
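Indeed, multiplying the last lower bound by $p_r(1-p_r)/d_{1r}>0$ and expanding, $$p_r(u_r-w_r)^2-p_r(1-p_r)u_r^2+(1-p_r)w_r^2 =p_r^2u_r^2-2p_ru_rw_r+w_r^2=(p_ru_r-w_r)^2,$$ which yields the stated final expression.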