Consider $x\in (-\infty, \infty)$, $0\leq d_i<1$, $\mu_i\in \mathbb{R}$ and $\sigma_i\geq 0$ for $i=1,\ldots,s$, with $\sum_{i=1}^s d_i=1$.
Assume $$ \sum_{i=1}^s d_i\frac{1}{1+e^{\frac{-(x-\mu_i)}{\sigma_i}}}=0 $$
Statement: if the pairs $(\mu_i,\sigma_i)_{i=1}^s$ are distinct across $i$ and $\sigma_i>0$ for all $i=1,\ldots,s$, then $d_i=0$ for all $i=1,\ldots,s$.
I do not understand this statement. What I see is that the equation above can hold only if $d_i=0$ for all $i=1,\ldots,s$ or $x=-\infty$. Can you help me see the rationale behind the statement? What am I doing wrong?
Notice that $x=-\infty$ is not a real number.
The statement that you wrote is the following:

IF $x\in\mathbb{R}$, $0\leq d_i<1$ with $\sum_{i=1}^s d_i=1$, the pairs $(\mu_i,\sigma_i)$ are distinct, $\sigma_i>0$ for all $i=1,\ldots,s$, and $$\sum_{i=1}^s d_i\frac{1}{1+e^{\frac{-(x-\mu_i)}{\sigma_i}}}=0,$$

THEN the only possibility is $d_i=0$ for all $i=1,\ldots,s$.
This statement, as stated, is nonsense: if $\sum_{i=1}^s d_i=1$, then it is impossible that $d_i=0$ for all $i=1,\ldots,s$.
Given that one of your tags is algebra-precalculus, I will assume that you can answer this using very elementary properties of the reals. Probably the easiest correct version of the statement is the following:
IF $x\in\mathbb{R}$, $d_i\geq 0$ and $\sigma_i>0$ for all $i=1,\ldots,s$, and $$\sum_{i=1}^s d_i\frac{1}{1+e^{\frac{-(x-\mu_i)}{\sigma_i}}}=0,$$

THEN the only possibility is $d_i=0$ for all $i=1,\ldots,s$.
This statement is easy to prove if you keep in mind that $e^y>0$ for every real number $y$, so $1+e^y>1>0$. Therefore, setting $\alpha_i=\dfrac{1}{1+e^{\frac{-(x-\mu_i)}{\sigma_i}}}$ for all $i=1,\ldots,s$, your equation becomes $$S=\sum_{i=1}^s d_i\cdot \alpha_i=0$$ with $\alpha_i>0$ and $d_i\geq 0$.
If $d_i>0$ for some $i$, then $S\geq d_i\cdot\alpha_i>0$, since every term in the sum is nonnegative and the product of two positive numbers is positive. But that contradicts the hypothesis that $S=0$.
Therefore, the only option is $d_i=0$ for all $i=1,2,\ldots,s$.
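If it helps to see this numerically, here is a small sketch (the weights and parameters below are made up for illustration): each sigmoid term lies strictly in $(0,1)$, so with $d_i\geq 0$ and at least one $d_i>0$ the sum is strictly positive for every real $x$, and it vanishes only when all $d_i=0$.

```python
import math

def sigmoid_sum(x, d, mu, sigma):
    """Evaluate sum_i d_i / (1 + exp(-(x - mu_i)/sigma_i))."""
    return sum(di / (1.0 + math.exp(-(x - mi) / si))
               for di, mi, si in zip(d, mu, sigma))

# Arbitrary example parameters: nonnegative weights summing to 1,
# distinct (mu_i, sigma_i) pairs with sigma_i > 0.
d = [0.2, 0.5, 0.3]
mu = [-1.0, 0.0, 2.0]
sigma = [0.5, 1.0, 2.0]

# The sum stays strictly positive for every real x we try,
# even far to the left, where each term is tiny but still > 0.
for x in [-50.0, -1.0, 0.0, 3.0, 50.0]:
    assert sigmoid_sum(x, d, mu, sigma) > 0

# Only the all-zero weight vector makes the sum exactly zero.
assert sigmoid_sum(1.0, [0.0, 0.0, 0.0], mu, sigma) == 0.0
```

The point of the experiment: as $x\to-\infty$ the sum tends to $0$ but never reaches it, which is exactly why "$x=-\infty$" is not a counterexample in $\mathbb{R}$.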