I'm not sure if this violates site policies, but I'd like to ask this question through a small thought experiment.
Imagine we have a blank-slate meta-machine capable of understanding natural language in the same manner humans do. Think of it as a small child on steroids.
What topics would we have to model within this machine, and in what order, to endow it with the capability to perform elementary arithmetic and understand elementary algebra?
Indeed, mathematicians are currently doing this --- they're formalizing large chunks of mathematics in "proof assistants": computer programs that can verify proofs. Examples of such tools include Coq and Lean.
You can take a look at Mathematical Components, a project which formalized the Feit-Thompson theorem from group theory (and more results) in Coq. They have an interactive graph of the concepts they have formalized.
There is also a recent formalization of perfectoid spaces in Lean. This concept comes from Peter Scholze's work --- that's all I know about it. There is a nice diagram at the link which shows the vast amount of mathematics they had to formalize to get to this point.
Inside these projects, you can find formalizations of elementary arithmetic, usually starting from the Peano axioms for natural numbers.
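To give a flavor of what such a formalization looks like, here is a minimal sketch in Lean 4 of a Peano-style development: the naturals are built from scratch (rather than using Lean's built-in `Nat`; the names `MyNat`, `add`, and `zero_add` are illustrative), addition is defined by recursion, and a first theorem is proved by induction.

```lean
-- A Peano-style inductive definition of the natural numbers:
-- every natural is either zero or the successor of a natural.
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

-- Addition defined by recursion on the second argument,
-- mirroring the Peano-axiom equations n + 0 = n and n + S m = S (n + m).
def add : MyNat → MyNat → MyNat
  | n, MyNat.zero   => n
  | n, MyNat.succ m => MyNat.succ (add n m)

-- A first theorem: zero is a left identity for addition.
-- It is not definitionally true, so it requires a proof by induction.
theorem zero_add (n : MyNat) : add MyNat.zero n = n := by
  induction n with
  | zero => rfl
  | succ m ih => simp [add, ih]
```

From this kind of starting point, the projects above build up multiplication, ordering, integers, and eventually the advanced structures mentioned earlier, with every step machine-checked.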