I will use a specific example, but my question is more general. I went to a number theory conference, and one thing surprised me: nearly half the talks began with "Assuming the generalized Riemann Hypothesis..." Almost always, the crux of the argument depended on this conjecture.
Why would mathematicians do research assuming a conjecture? By definition, it is not yet known to be true. On the off chance that it turns out to be false, wouldn't all of the papers that assumed the conjecture be invalidated? I may be answering my own question, but I speculate that:
There is such strong evidence in support of the conjecture (the Riemann Hypothesis in particular), and such a lack of evidence against it, that it is "safe" to assume it.
It's not so much about the result obtained as about the methods and techniques used to prove it. Perhaps assuming the conjecture, in the case of the Riemann Hypothesis, leads to the development of new techniques in analytic number theory.
The results would not be invalidated but would be rendered vacuous, i.e. true but no longer informative.
A result says "If the Riemann hypothesis is true, then blah blah blah mumbo jumbo."
If the Riemann hypothesis ultimately turns out to be false, then it is still true that if the Riemann hypothesis is true, then blah blah blah mumbo jumbo.
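The point that a conditional result survives the refutation of its hypothesis is just the truth table of material implication. A minimal sketch in Python (the `implies` helper is mine, purely for illustration):

```python
def implies(p: bool, q: bool) -> bool:
    """Material implication: "if p then q" is false only when p is true and q is false."""
    return (not p) or q

# If the hypothesis p is false, the implication holds no matter what q is
# (this is exactly why a refuted Riemann Hypothesis would not make the
# conditional results false, only uninformative):
for q in (True, False):
    assert implies(False, q)  # vacuously true

# The only combination that falsifies the implication:
assert not implies(True, False)
```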
"Are all cell phones in the classroom turned off?", asks the instructor. If it happens that there are no cell phones in the classroom, then the correct answer is "yes". That's "vacuous truth". This is one example showing how the concept of vacuous truth can be quite practical. The "yes" answer would no longer be informative if it were learned that no cell phones are in the classroom. And if no cell phones are in the classroom, it is quite probable that no one even knows that. You would only know that you don't have a cell phone; you wouldn't know that about all your classmates.