Consider the following extensive form game:
Its strategic form is
$$\begin{array}{c|cc} & C & D \\ \hline A & (0,0) & (0,0) \\ B & (-1,3) & (2,2) \end{array}$$
Question(s)
Are these two games completely equivalent?
Now consider a repeated version of this game in which player $1$ always plays $A$. Then, it seems, if the first version of the game is played, player $1$ can never tell whether player $2$ played $C$ or $D$ in the previous period, while the same is not true for the second version of the game.
Is this correct? Is my reasoning erroneous?
Cheers!

Whether a game in normal form and a game in extensive form carry the same information is an important topic in a course on game theory. When we write down a game in normal form as a $2 \times 2$ matrix, we are almost always thinking of it as a game where both players choose simultaneously. In the extensive form, the nodes represent places where an agent chooses. It is also customary to show the information sets at each node.
Your game tree is not the same as the normal form for the game you showed. The corresponding extensive form would have player 1 choosing $A$ or $B$ at the root. Then there would be one information set below the root where player 2 chooses $C$ or $D$, not knowing what player 1 has done. Then the leaves of the tree would list out the four payoffs, distinguishing the two different ways that $(0,0)$ might arise. In particular, there would be a $(0,0)$ payoff under $(A,C)$ and another $(0,0)$ under $(A,D)$.
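To make the simultaneous-move reading concrete, here is a minimal sketch (in Python, with the payoff table transcribed from the matrix above) that checks each of the four strategy profiles for mutual best responses:

```python
# Pure-strategy Nash equilibria of the 2x2 simultaneous-move game.
# Rows are player 1's actions (A, B); columns are player 2's (C, D).
payoffs = {
    ("A", "C"): (0, 0), ("A", "D"): (0, 0),
    ("B", "C"): (-1, 3), ("B", "D"): (2, 2),
}
rows, cols = ("A", "B"), ("C", "D")

def is_nash(r, c):
    u1, u2 = payoffs[(r, c)]
    # Player 1 must not gain by switching rows, holding c fixed;
    # player 2 must not gain by switching columns, holding r fixed.
    best1 = all(payoffs[(r2, c)][0] <= u1 for r2 in rows)
    best2 = all(payoffs[(r, c2)][1] <= u2 for c2 in cols)
    return best1 and best2

equilibria = [(r, c) for r in rows for c in cols if is_nash(r, c)]
print(equilibria)  # [('A', 'C')]
```

Only $(A,C)$ survives: against $D$, player 1 would deviate to $B$; against $B$, player 2 would deviate to $C$.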
If you play the game twice, then the tree illustrating the extensive form of the game would be twice as deep. (I can't draw it here.) There would be a root, and there would be payoffs at the leaves. At each information node, you would indicate whose turn it was to choose an action. Also, each information node would show what that player knows. A Nash equilibrium in that case would have to specify choices at each node for each player, and those choices would have to be mutual best responses. Further, the choices would have to be specified even for nodes that were not reached along the equilibrium path. Finally, one might want to examine a subset of Nash equilibria that are "subgame perfect" or satisfy some other refinement having to do with the sequential updating of information. My favorite textbook on all this is *A Course in Game Theory* by Osborne and Rubinstein.
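For the sequential version (player 2 observes player 1's move before choosing), subgame perfection amounts to backward induction. A small sketch, again with the payoffs from the matrix above, and breaking player 2's tie after $A$ in favor of $C$ (an arbitrary assumption; either tie-break gives player 1 a payoff of $0$ from $A$):

```python
# Backward induction on the sequential version of the game:
# player 1 moves first, player 2 observes and then chooses.
payoffs = {
    ("A", "C"): (0, 0), ("A", "D"): (0, 0),
    ("B", "C"): (-1, 3), ("B", "D"): (2, 2),
}

def best_reply(r):
    # Player 2's payoff-maximizing column after seeing row r.
    # Ties are broken in favor of the first column listed ("C").
    return max(("C", "D"), key=lambda c: payoffs[(r, c)][1])

# Player 1 anticipates player 2's reply at each of her own moves.
choice = max(("A", "B"), key=lambda r: payoffs[(r, best_reply(r))][0])
print(choice, best_reply(choice), payoffs[(choice, best_reply(choice))])
```

Note that player 2's reply after $B$ ($C$, yielding $3$ rather than $2$) matters even though $B$ is never played: it is exactly the off-path choice that deters player 1 from choosing $B$.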