"If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck"
We have the 3 probabilities:
$P(\text{look}) = 80\%$
$P(\text{swim}) = 70\%$
$P(\text{quack}) = 90\%$
I would assume then that $P(\text{duck}) = P(\text{look})·P(\text{swim})·P(\text{quack})$ $= 0.8 · 0.7 · 0.9$ $= 0.504$
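As a quick sanity check, the product calculation above is just three multiplications (a minimal sketch, not an endorsement of the approach):

```python
# The naive product of the three probabilities, as questioned above
p_look, p_swim, p_quack = 0.8, 0.7, 0.9
p_product = p_look * p_swim * p_quack
print(p_product)  # ~0.504
```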
This is obviously dead wrong: common sense expects the resulting probability to be close to $1$. What would be the right operation on the individual probabilities to represent the original saying correctly?
Update:
Some context: we have a problem in an industrial environment where we have to validate whether our calculated values are true. We can conduct a few independent measurements; each addresses a different aspect of the same setup, and each returns a yes or no with an associated probability. We then have to combine these results and compare them against the theoretical calculations. In effect, what we are doing is duck typing.
Update 2:
While reading the answers it hit me: the more independent measurements we make of the duck's properties, the more strongly they should prove or disprove the "duckness" of the object. If we had 6 independent measurements (look, swim, quack, fly, eat, walk), each with probability around 0.1, our calculation should conclude that this is almost certainly not a duck. If instead the measurements are confirming (above 0.5), I would expect a higher "duckness" probability.
The updates to the question imply that what you really want to do is classify an object based on independent measurements of the object. The solution to that problem is known as the Naive Bayes classifier. The math is similar to heropup's answer but without the constraint $P(L ~|~ D) = 1$. Instead we start by writing Bayes' theorem in a convenient way:
$$ \frac{P(D ~|~ L)}{P(\neg D ~|~ L)} = \frac{P(D)}{P(\neg D)} \frac{P(L ~|~ D)}{P(L ~|~ \neg D)} $$
This extends easily to multiple independent measurements:
$$ \frac{P(D ~|~ L,S,Q)}{P(\neg D ~|~ L,S,Q)} = \frac{P(D)}{P(\neg D)} \frac{P(L ~|~ D)}{P(L ~|~ \neg D)} \frac{P(S ~|~ D)}{P(S ~|~ \neg D)} \frac{P(Q ~|~ D)}{P(Q ~|~ \neg D)} $$
Now use the first formula to simplify the second:
$$ \frac{P(D ~|~ L,S,Q)}{P(\neg D ~|~ L,S,Q)} = \left(\frac{P(D)}{P(\neg D)}\right)^{-2} \frac{P(D ~|~ L)}{P(\neg D ~|~ L)} \frac{P(D ~|~ S)}{P(\neg D ~|~ S)} \frac{P(D ~|~ Q)}{P(\neg D ~|~ Q)} $$
Plugging in $P(D ~|~ L) = 0.8$ and so on gives:
$$ \frac{P(D ~|~ L,S,Q)}{P(\neg D ~|~ L,S,Q)} = \left(\frac{P(D)}{P(\neg D)}\right)^{-2} \frac{0.8}{1-0.8} \frac{0.7}{1-0.7} \frac{0.9}{1-0.9} $$
Even with the independence assumption, we cannot answer the question without $P(D)$. For simplicity, I will take $P(D) = 0.5$, giving
$$ \frac{P(D ~|~ L,S,Q)}{P(\neg D ~|~ L,S,Q)} = 84 \\ P(D ~|~ L,S,Q) = \frac{84}{84 + 1} = 0.9882 $$
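The calculation above is easy to reproduce numerically. Here is a minimal sketch that combines the three per-measurement posteriors using the odds formula, assuming the same prior $P(D) = 0.5$ (so the prior odds are 1):

```python
# Combine per-measurement posteriors P(D | measurement_i) into joint posterior odds,
# per the formula: joint odds = (prior odds)^(1-n) * product of per-measurement odds
probs = [0.8, 0.7, 0.9]          # P(D|L), P(D|S), P(D|Q)
prior = 0.5                       # assumed prior P(D)
prior_odds = prior / (1 - prior)  # = 1 here

odds = prior_odds
for p in probs:
    # each factor is the likelihood ratio recovered from that measurement's posterior odds
    odds *= (p / (1 - p)) / prior_odds

print(odds)               # ~84 (posterior odds)
print(odds / (1 + odds))  # ~0.9882 (posterior probability)
```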
This approach has the intuitive quality that measurements with probability >0.5 will increase your confidence in $D$, measurements with probability <0.5 will decrease your confidence in $D$, and measurements with probability exactly 0.5 will have no effect (since this is the same as the prior probability $P(D)$ that I assumed).
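To illustrate those three regimes (and the six-measurements-of-0.1 scenario from the question's second update), here is a small helper function; the name `duckness` and the default prior of 0.5 are my own choices, not part of the derivation:

```python
def duckness(probs, prior=0.5):
    """Posterior P(D | all measurements) from per-measurement posteriors P(D | m_i),
    assuming independent measurements (Naive Bayes) and the given prior P(D)."""
    prior_odds = prior / (1 - prior)
    odds = prior_odds
    for p in probs:
        odds *= (p / (1 - p)) / prior_odds
    return odds / (1 + odds)

print(duckness([0.8, 0.7, 0.9]))  # ~0.9882, as computed above
print(duckness([0.5, 0.5, 0.5]))  # 0.5 -- measurements at 0.5 leave the prior unchanged
print(duckness([0.1] * 6))        # ~1.9e-6 -- six disconfirming measurements: not a duck
```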