The famous Slater's condition states that if a convex optimization problem has a feasible point $x_0$ in the relative interior of the problem domain and every inequality constraint $f_i(x) \le 0$ is strict at $x_0$, i.e. $f_i(x_0) < 0$, then strong duality holds for the problem. I know how to prove this.
The weak form of Slater's condition is the same as the strong form, except that it only requires strictness, $f_i(x_0) < 0$, for the non-affine inequality constraints; affine inequality constraints need only be satisfied at $x_0$. The weak condition also implies strong duality, but I don't know how to prove it.
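A minimal numeric sketch (not a proof) of the statement, using a toy problem where no point satisfies the affine constraints strictly, yet the duality gap is zero:

```python
import numpy as np

# Toy problem: minimize x^2 subject to x <= 0 and -x <= 0.
# Both constraints are affine and the feasible set is {0}:
# no point satisfies them strictly, so strong Slater fails,
# but weak Slater holds (there are no non-affine constraints).

# Primal optimum: the only feasible point is x = 0.
primal_opt = 0.0 ** 2

# Lagrangian: L(x, l1, l2) = x^2 + l1*x + l2*(-x).
# Minimizing over x gives x* = (l2 - l1)/2, hence the dual function
# g(l1, l2) = -(l1 - l2)^2 / 4.
def dual(l1, l2):
    return -(l1 - l2) ** 2 / 4.0

# Maximize g over a grid of nonnegative multipliers; the maximum
# is attained whenever l1 == l2.
grid = np.linspace(0.0, 5.0, 101)
dual_opt = max(dual(l1, l2) for l1 in grid for l2 in grid)

print(primal_opt, dual_opt)  # duality gap is 0, as weak Slater predicts
```

Of course this only checks one instance; the question is about the general proof.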
Notice that Slater's condition does not assume that $A$ is full row rank; this assumption is made only inside the proof, to simplify it (it is used to reach a contradiction at the end). Hence you can convert the affine inequality constraints to the form $Cx\leq 0$, add a slack variable $s\geq 0$, and get $Cx=-s$. Just as the proof makes a rank assumption on $A$, you can make the same assumption on the concatenation $(C|A)$. More importantly, notice that all the conditions together eventually imply that the de facto relative interior of the feasible set is non-empty; this is the real meaning of the distinction between affine and non-affine constraints. Recall that the constraints eventually define the "true" feasible region, and splitting that region between a set $D$ and the other constraints is only a difference in notation (for convenience).
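Spelled out, the slack-variable reformulation above (writing the problem's equality constraints as $Ax=b$, as in the standard proof, and stacking rows to form the concatenation $(C|A)$) is:

```latex
Cx \le 0
\;\Longleftrightarrow\;
\exists\, s \ge 0 :\; Cx = -s ,
\qquad\text{so the affine data stack into}\qquad
\begin{pmatrix} C \\ A \end{pmatrix} x
=
\begin{pmatrix} -s \\ b \end{pmatrix}.
```

The rank assumption is then placed on this stacked matrix instead of on $A$ alone.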
At the end of the day, Slater's condition means that the "true" feasible set is convex with non-empty relative interior. If the intersection of the set $D$ with a non-affine constraint is of the form $f_i(x)=0$, the de facto set may fail to be convex, since the level set $\{x : f_i(x)=0\}$ of a convex function need not be convex. If the same intersection between $D$ and an affine constraint is of the form $f_j(x)=0$, we still have a convex feasible region, since the level set of an affine function is an affine set. And since the proof of strong duality relies on strict separation, we must have a convex feasible set with non-empty interior.
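To see why the interior condition matters, a numeric look at the classic example (from Boyd & Vandenberghe's textbook) where Slater's condition fails for a non-affine constraint and a duality gap actually appears:

```python
import math

# Classic example: minimize e^{-x} subject to x^2 / y <= 0,
# with domain y > 0. The feasible set is {(0, y) : y > 0}, so the
# primal optimum is e^0 = 1, but no feasible point satisfies the
# (non-affine) constraint strictly: Slater's condition fails.

primal_opt = math.exp(0.0)  # = 1.0

# Dual function: g(lam) = inf_{x, y>0} e^{-x} + lam * x^2 / y.
# For lam >= 0 the infimum is 0 (take x = t, y = t^3, t -> infinity),
# so the dual optimum is 0 and the duality gap is 1.
# Check the limiting behaviour numerically along that path:
lam = 1.0
vals = [math.exp(-t) + lam * t**2 / t**3 for t in (10.0, 100.0, 1000.0)]
print(primal_opt, vals)  # vals decrease toward 0 while the primal is 1
```

Here the trouble is exactly the one described above: the feasible set touches the constraint only along $f_i(x)=0$, so no strict separation argument is available.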