The definition of consistency is: a set of formulas Σ ⊆ WFF is consistent iff there is no A ∈ WFF such that Σ ⊢ A and Σ ⊢ (¬A).
Say you have a set of propositional statements (e.g. $A \lor B \rightarrow C$, etc.), and let's call this "Set1".
I know that to prove that set is inconsistent, you find an "A" such that Set1 ⊢ A and Set1 ⊢ (¬A).
But what are the steps to prove that the set is consistent?
The most useful technique for proving $\Sigma$ consistent is to provide a model for it -- that is, an interpretation of the primitive symbols that makes every formula in $\Sigma$ true. If the logic is sound, it cannot derive a false conclusion from true premises, so whenever $\Sigma\vdash A$, $A$ will also be true in that model. Since $A$ and $\neg A$ cannot both be true in the same interpretation, it follows that $\Sigma$ is consistent.
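As a concrete illustration of this technique, take $\Sigma = \{P \rightarrow Q,\ \neg Q\}$. The interpretation that assigns $P \mapsto \text{false}$ and $Q \mapsto \text{false}$ makes both formulas true, so it is a model of $\Sigma$, and by the soundness argument just given, $\Sigma$ is consistent.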
Conversely, if the logic is complete, then every consistent $\Sigma$ has a model. The usual propositional logic and first-order logic are both complete.
In the case of propositional logic, an interpretation is simply a table that shows which propositional letters we take to be true and which ones we take to be false. If $\Sigma$ contains finitely many different propositional letters, we can in principle try all interpretations until we find one that is a model.
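That "try all interpretations" idea can be sketched in a few lines of Python. This is an illustrative sketch, not a full WFF parser: formulas are represented as Python predicates over an interpretation (a dict from propositional letters to truth values), and the names `find_model` and `sigma` are made up for the example.

```python
from itertools import product

def find_model(formulas, letters):
    """Try every assignment of True/False to the propositional letters;
    return the first interpretation satisfying all formulas, or None."""
    for values in product([True, False], repeat=len(letters)):
        interp = dict(zip(letters, values))
        if all(f(interp) for f in formulas):
            return interp
    return None  # no model exists: the set is unsatisfiable

# Example: Sigma = { A or B, not A } has a model, so it is consistent.
sigma = [
    lambda i: i["A"] or i["B"],   # A ∨ B
    lambda i: not i["A"],         # ¬A
]
model = find_model(sigma, ["A", "B"])
# → {"A": False, "B": True}
```

If `find_model` returns `None`, every interpretation falsifies some formula; for a complete logic (as noted above) that means the set is inconsistent. The search is exponential in the number of letters, which is why this only works "in principle" for large sets.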
The situation is less nice for first-order logic. It is still true that every consistent $\Sigma$ has a model, but the model may need to contain infinitely many elements for the quantifiers to range over, and it is not necessarily easy to describe. Even if we have a description of a model, proving that it actually is a model can be very difficult -- in some sense it can be exactly as difficult as proving that $\Sigma$ is consistent.
And due to Gödel's incompleteness theorems, we know that there are first-order theories that are consistent but cannot be proved to be consistent (in any given meta-system we might wish to conduct this proof in).