I know the term "Algorithm" is used in "Computer science" and "Computer programming" for any set of operations performed in an ordered way to solve a problem. But this does not seem to be a rigorous mathematical definition.
Is there any rigorous definition of the concept of "Algorithm" in the scope of "Pure mathematics"?
As pointed out in the comments, the best-known formalization of what an algorithm is uses the theory of Turing machines.
This definition is surprisingly robust: for every other sensible attempt at formalizing the notion of an algorithm (perhaps the most famous alternative is the lambda calculus), the resulting model has turned out to be equivalent to Turing's, in the sense that every function from the natural numbers to the natural numbers computable in that model can be computed by a suitably designed Turing machine.
In fact, the evidence is so strong that computer scientists conjecture that every possible sensible model of computation will be reducible to Turing's model. This is known as the Church-Turing thesis.
As requested by the OP, here is a possible formalization of Turing machines in a set-theory-friendly presentation.
A Turing machine is a function from the set $Q\times \Sigma$ to the set $Q \times \Omega$, where $Q$ is a finite set of states with two special states $START$ and $HALT$ that designate initialization and ending of the computation, $\Sigma$ is a finite set of symbols (typically $1$ and $0$) and $\Omega$ is a set of possible actions that the TM may take (typically writing a $1$, writing a $0$, moving left and moving right).
Thus at each step the machine reads the contents of the cell it is on and, according to its internal state and the symbol read, changes its state and performs an action.
Now, the Turing machine starts over a tape containing an input of finitely many ones and infinitely many zeros (or other symbols, depending on your choice of formalism). If we specify an encoding for inputs and outputs, then we can define "running" the algorithm as starting the TM over a valid input in the $START$ state and repeatedly applying the transition table we have defined, performing the indicated actions on the tape, until we reach the $HALT$ state. We can then decode the contents of the tape at that moment as the output of the algorithm.
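To make the transition-table picture concrete, here is a minimal sketch of such a machine in Python. The concrete action set (write $0$, write $1$, move left, move right) and the example machine are my own illustrative choices, not fixed by the formalism:

```python
def run_tm(delta, tape, pos=0, max_steps=10_000):
    """Run a Turing machine given as a transition table delta:
    (state, symbol) -> (state, action), as in the Q x Sigma -> Q x Omega picture.

    The tape is a dict from integer positions to symbols; unwritten cells read 0.
    Returns the tape contents once the HALT state is reached.
    """
    state = "START"
    for _ in range(max_steps):
        if state == "HALT":
            return tape
        symbol = tape.get(pos, 0)          # read the current cell
        state, action = delta[(state, symbol)]
        if action == "W0":                 # write a 0
            tape[pos] = 0
        elif action == "W1":               # write a 1
            tape[pos] = 1
        elif action == "L":                # move left
            pos -= 1
        elif action == "R":                # move right
            pos += 1
    raise RuntimeError("machine did not halt within max_steps")

# Example: a machine that appends a 1 to a block of 1's (unary successor).
succ = {
    ("START", 1): ("START", "R"),   # skip over the existing 1's
    ("START", 0): ("HALT",  "W1"),  # write a 1 at the first blank and halt
}

tape = run_tm(succ, {0: 1, 1: 1, 2: 1})              # input: unary 3
print(sorted(k for k, v in tape.items() if v == 1))  # -> [0, 1, 2, 3]
```

Note that `max_steps` is only a safeguard for the simulation; an actual TM may of course run forever, which is exactly why the halting problem is interesting.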
Of course, this formalism is quite useful to reason about TMs, but if we want to embed them in Set Theory we will have to step up our game.
We can construct TMs as functions which transform a set representing the state of the machine, its position and the infinite tape into a new state, position and tape, subject to a series of conditions.
Under this interpretation, the TM is a function $Q\times\mathbb{Z}\times T \to Q\times\mathbb{Z}\times T$, where $Q$ are the states, the second member of the tuple is the position and $T$ is the infinite tape, itself a function from $\mathbb{Z}$ to $\Sigma$.
To be a proper TM, some restrictions have to be imposed, such as that only the cell the machine currently occupies may be modified, that there must be special start and halt states, etc. $Q$ and $\Sigma$ can be taken to be finite sets of natural numbers.
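The configuration-based view can be sketched the same way: one application of the machine maps a triple in $Q\times\mathbb{Z}\times T$ to another such triple. The representation of the tape as a dict and the action names below are illustrative assumptions, as before:

```python
def step(delta, config):
    """One application of the machine as a map Q x Z x T -> Q x Z x T.

    config is a triple (state, position, tape). The restriction that only
    the current cell may change is enforced by construction: we copy the
    tape and touch only tape[pos].
    """
    state, pos, tape = config
    symbol = tape.get(pos, 0)
    new_state, action = delta[(state, symbol)]
    new_tape = dict(tape)               # the rest of the tape stays untouched
    new_pos = pos
    if action in ("W0", "W1"):
        new_tape[pos] = int(action[1])  # write 0 or 1 in the current cell
    elif action == "L":
        new_pos = pos - 1
    elif action == "R":
        new_pos = pos + 1
    return (new_state, new_pos, new_tape)

# Iterating step from the START state until HALT reproduces a full run:
delta = {("START", 1): ("START", "R"), ("START", 0): ("HALT", "W1")}
config = ("START", 0, {0: 1})
while config[0] != "HALT":
    config = step(delta, config)
print(config[1], sorted(config[2].items()))  # -> 1 [(0, 1), (1, 1)]
```

Here "running the algorithm" is just iterating this function until the first coordinate equals the halt state, which matches the set-theoretic description above.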
This is a powerful enough formalization to talk about TMs in the language of set theory.
If you are interested in a beautiful and thorough treatment of computability theory and its embedding in first-order logic, I suggest reading *Computability and Logic* by Boolos, Burgess and Jeffrey.