In *A Friendly Introduction to Mathematical Logic*, Leary states that one of the axioms of arithmetic, $N$, is:
$(\forall x)\, xE0=S0$,
which informally says that $x^0=1$ for every $x\in\mathbb{N}$. This is true for every non-zero natural number, but since his natural numbers contain zero, he would have to exclude zero explicitly, and he doesn't. Is that a mistake, or is there something I've missed?
If I'm not missing anything, will this affect the later exposition? I've checked the errata; nothing is mentioned about this.
Your fundamental mistake is the assumption that $0^0$ shouldn't be defined. This is a common error.
First, when working in first-order logic, it is not possible to leave things undefined: every function symbol must denote a total function on the domain.
Second, in the natural numbers, there are a lot of reasons to actually define $0^0=1$. And, indeed, even in an arbitrary ring we implicitly define $0^0=1$ when we write polynomials as $\sum_{i=0}^n a_ix^i$ and then evaluate the value at $x=0$ as $a_0$. It is useful to define $0^0=1$.
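As a quick sanity check (a Python sketch with a hypothetical `poly_eval` helper, not anything from the book): evaluating $\sum_{i=0}^n a_ix^i$ term by term at $x=0$ recovers $a_0$ precisely because $0^0=1$:

```python
# Evaluating p(x) = sum(a_i * x**i) term by term relies on 0**0 == 1
# so that p(0) recovers the constant term a_0.

def poly_eval(coeffs, x):
    """Evaluate sum(a_i * x**i), where coeffs[i] is a_i."""
    return sum(a * x**i for i, a in enumerate(coeffs))

p = [7, 3, 2]           # p(x) = 7 + 3x + 2x^2
print(poly_eval(p, 0))  # 7, because Python defines 0**0 == 1
print(0**0)             # 1
```

Python itself adopts this convention: `0**0` evaluates to `1`.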
In lambda calculus and set theory, there are really trivial ways to define $m^n$, each of which yields $1$ whenever $n=0$. It is much more work to define an expression that leaves $0^0$ undefined/NaN.
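For instance, the set-theoretic definition takes $m^n$ to be the number of functions from an $n$-element set to an $m$-element set. A brute-force Python count (the helper name is my own) gives $0^0=1$ because there is exactly one function from the empty set to itself: the empty function.

```python
from itertools import product

def count_functions(m, n):
    """m^n as the number of functions from an n-element set
    into an m-element set (the set-theoretic definition)."""
    return sum(1 for _ in product(range(m), repeat=n))

print(count_functions(0, 0))  # 1: the empty function
print(count_functions(2, 3))  # 8
print(count_functions(0, 2))  # 0: no functions into the empty set
```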
Intuitively, why would the product of an empty sequence of $0$s be any different from the product of an empty sequence of $1$s? The sequences are exactly the same thing. Indeed, we usually define any empty product (or sum) to be the identity element for that operation. This is also, intuitively, why $0! = 1$ - it is an empty product.
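A minimal illustration of the empty-product convention, using Python's standard library:

```python
import math

print(math.prod([]))      # 1: the empty product is the multiplicative identity
print(sum([]))            # 0: the empty sum is the additive identity
print(math.factorial(0))  # 1: 0! is an empty product
```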
And finally, there is just no contradiction reached by defining $0^0=1$. I dare you to find a contradiction.
The main time we might want to leave $0^0$ undefined is when we are defining $x^y$ for $x,y$ being continuous values. Even then, there is no real reason to avoid defining $0^0=1$, as long as you realize that the definition yields a discontinuous function. (If you are a strict constructivist, this might be a problem - discontinuous functions aren't possible in constructivist math - but even constructivists don't have a problem with $0^0=1$ in the natural numbers.)
In calculus, when learning about limits, we say that $0^0$ is an "indeterminate form." This does not mean undefined. The word "form" is important in that phrase.
For example, if $\lim_{x\to a} f(x)=\lim_{x\to a} g(x)=0$ for positive functions $f(x),g(x)$, then we have no way of generally determining the value of:
$$\lim_{x\to a} f(x)^{g(x)}$$ because the "form" we get when replacing $f(x)$ and $g(x)$ with their limits is $0^0$, which is an "indeterminate form," which means that the limit is unknown and/or might not even exist. This is just another way of saying that $x^y$ has no continuous definition at $(0,0)$, but that doesn't mean we are not allowed to define a value.
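A numeric illustration (my own example functions, not from any text): take $f(x)=x$ with $g(x)=x$, versus $f(x)=e^{-1/x}$ with $g(x)=x$. In both cases $f$ and $g$ tend to $0$ as $x\to 0^+$, so both limits have the form $0^0$, yet they disagree:

```python
import math

x = 0.01  # a small positive x standing in for x -> 0+

# f(x) = x, g(x) = x: the limit of x**x as x -> 0+ is 1.
print(x ** x)                 # ≈ 0.955, heading toward 1

# f(x) = exp(-1/x), g(x) = x: (e^{-1/x})^x equals e^{-1} for every x > 0.
print(math.exp(-1 / x) ** x)  # ≈ 0.368 = 1/e
```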
In computers, using floating-point arithmetic, $0.0^{0.0}$ is sometimes left undefined because a stored $0.0$ is often the result of rounding, so it actually represents a range of values $(-\epsilon,+\epsilon)$. The discontinuity then becomes a problem.
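For what it's worth, IEEE 754 splits the difference: its `pow` operation defines `pow(0, 0) = 1` (and most languages, Python included, follow it), while its separate `powr` operation, defined as $e^{y\log x}$, returns NaN at $(0,0)$. A quick check of the Python behavior:

```python
import math

# Python's ** and math.pow follow IEEE 754 pow, which defines pow(0, 0) = 1;
# the NaN behavior corresponds to IEEE's separate powr operation.
print(0.0 ** 0.0)          # 1.0
print(math.pow(0.0, 0.0))  # 1.0
```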
The fundamental problem is that people still don't intuit zero as a "foundational" number. That intuition goes back to ancient times, and it is a strong instinct.
So, the reason some accept $a^0=1$ is the argument:
$$a^0=\frac{a^1}{a^1}=1$$ which only works for $a\neq 0$.
But when you start the number theory axioms with a $0$, you start with the axioms:
$$a+0=a\\a\cdot 0=0\\a^0=1$$
and then have the successor axioms:
$$a+(b+1)=(a+b)+1\\a\cdot(b+1)=a\cdot b + a\\a^{b+1}=a^b\cdot a$$
These axioms can also be seen as recursive definitions of addition, multiplication, and exponentiation.
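To make that concrete, here is a direct Python transcription of the recursion (my own sketch; the helper names are not from any formal system):

```python
# Each successor-style axiom reads directly as a recursive definition
# on the natural numbers, with the b = 0 axiom as the base case.

def add(a, b):
    return a if b == 0 else add(a, b - 1) + 1      # a + 0 = a

def mul(a, b):
    return 0 if b == 0 else mul(a, b - 1) + a      # a * 0 = 0

def exp(a, b):
    return 1 if b == 0 else exp(a, b - 1) * a      # a^0 = 1

print(exp(0, 0))  # 1, forced by the base case a^0 = 1
print(exp(2, 5))  # 32
```

Note that `exp(0, 0)` returns `1` before the recursive clause is ever consulted: $0^0=1$ is simply the base case of the recursion, not a special exception.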