Some authors use different equals signs for different purposes. The most common are "$=$", "$\equiv$", "$:=$", and "$:\equiv$". I have read that "$=$" dates back to $1557$ and is of mathematical origin.
What about the others? I have read that "$:=$" appears in some programming languages from the $1970$s, but does it occur earlier? And what are the origins of $\equiv$ and $:\equiv$?
The material here is extracted from Florian Cajori's book A History of Mathematical Notations.
If there are any mistakes, they are most likely due to my own misunderstanding.
The symbol $\equiv$ has been used in various branches of mathematics. It has been used for
arithmetic, as congruence - first introduced by Gauss in $1801$.
Ref: an online reference and paragraph $408$ of Cajori's book.
geometry, as geometric congruence - first introduced by Riemann.
It appears in G. F. B. Riemann's Elliptische Funktionen (Leipzig, 1899), pp. 1, 6.
Ref: paragraph $374$ of Cajori's book.
logic - its use can be dated back at least to $1910$.
It appears as 'definitional identity' in E. H. Moore, Introduction to a Form of General Analysis (1910), p. 18. In the first volume of Whitehead and Russell's Principia Mathematica (1910, pp. 5-38), it is instead used as the biconditional (i.e. $p \equiv q$ stands for "$p$ implies $q$ and $q$ implies $p$").
Ref: paragraphs $694, 695$ of Cajori's book.
Gottlob Frege may have used $\equiv$ even earlier, in $1879$ (he switched to $=$ in his $1893$ publication); the reference I have is not clear about what happened.
Ref: paragraph $687$ of Cajori's book.