A $b$-normal number (in the official sense) has an infinite sequence of digits in base $b$ that is uniformly distributed: each of the $b$ digits has natural density $1/b$, every pair of digits has density $b^{-2}$, every triple has density $b^{-3}$, and so on. In particular, such a number contains every finite sequence of digits infinitely often.
Might the following alternative definition of a $b$-normal number be somehow "equivalent" to the official one?
A number is $b$-normal iff its infinite sequence of digits in base $b$ contains every finite sequence of digits infinitely often.
Or, more precisely: in which respects do the two definitions agree, and where do they diverge? What makes the official one "better", "more natural", or easier to work with?
They are not equivalent. Your definition admits numbers that are excluded by the usual definition, and which I find contrary to its intent. One example is a take-off on the Champernowne constant: $$0.123456789\underbrace{000000000}_\text{9 0's}10111213\ldots9899\underbrace{000000\ldots000000}_\text{180 0's}100101102\ldots$$ where after each decade of integers we insert a run of zeros as long as that decade's digits. This number still contains every finite string of digits, yet the running density of zeros climbs up to $0.55$ and, once the construction gets going, never falls back to $0.1$.
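To make the failure concrete, here is a short Python sketch (the function name is made up for illustration) that builds a prefix of the modified constant above and tracks the running density of the digit $0$. The peaks occur at the ends of the zero runs and, by a short calculation, tend to $11/20 = 0.55$, so the density of $0$ never settles at $1/10$ and the number is not normal in base $10$.

```python
# Sketch: build a prefix of the modified Champernowne-style number from the
# answer (after each decade of integers, append a run of zeros as long as
# that decade's digits) and track the running density of the digit 0.

def modified_champernowne_digits(n: int) -> str:
    """Return the first n digits after the decimal point."""
    parts = []
    total = 0
    width = 1
    while total < n:
        # All width-digit integers, concatenated: "123456789", "1011...99", ...
        decade = "".join(str(k) for k in range(10 ** (width - 1), 10 ** width))
        parts.append(decade)
        parts.append("0" * len(decade))  # zero run as long as the decade
        total += 2 * len(decade)
        width += 1
    return "".join(parts)[:n]

digits = modified_champernowne_digits(200_000)

zeros = 0
peak = 0.0
for i, d in enumerate(digits, start=1):
    zeros += (d == "0")
    peak = max(peak, zeros / i)

# Peaks occur at the ends of the zero runs and tend to 11/20 = 0.55,
# so the density of 0 has no limit and the number is not normal.
print(f"peak running density of 0: {peak:.3f}")
```

Running this on longer and longer prefixes shows the peak density creeping toward $0.55$, while every finite digit string still eventually appears, since all of Champernowne's blocks are present.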
When you have two inequivalent definitions, the way to choose between them is to pick either the one that is more useful for proving theorems or the one that better reflects the notion you are trying to capture. I think of "normal number" as trying to say "there is nothing special about the distribution of the digits once you go out far enough", and of the Champernowne constant, which is normal in base $10$ yet follows an obvious pattern, as showing a weakness in the definition.