$\def\r{\Bbb R}$ $\def\q{\Bbb Q}$ As the title suggests, I am trying to do something that I know cannot be done, so my question is confused, and I am trying to make sense of it. Generally: (1) How does one make sense of what I describe below, and (2) are there known results and known terminology applicable to my comments? (I am tempted to think of some generalization of rings or monoids, but I can't make sense of it without using a unit.)
So, I was reading online about irreducible matrices (and their relation to strongly connected graphs), after starting with the book Dynamical Systems and Ergodic Theory by Pollicott and Yuri. As an example of an irreducible matrix they give $A=\begin{pmatrix} 0 & 1 & 1 \\ 0 & 1 & 1 \\ 1 & 0 & 0 \end{pmatrix}$. Since the first two rows coincide, clearly $\det(A)=0$ and $A$ is not invertible.
Nevertheless, one may look at the powers $A^n$ of $A$, as well as at the differences $A^n-A^{n-1}$. Certainly this is possible for $n\ge2$, and my computer (using the computer algebra system Reduce) happily evaluates the case $n=1$ too, as $A^1-A^{1-1}=\begin{pmatrix} -1 & 1 & 1 \\ 0 & 0 & 1 \\ 1 & 0 & -1 \end{pmatrix}$ (obviously using $A^{1-1}=I=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$), but the case $n=0$, that is $A^0-A^{-1}$, generates the error message ``Singular matrix'' (of course, as $A^{-1}$ does not exist).
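This is easy to reproduce outside of Reduce as well; here is a quick NumPy sketch of the same computation (my own code, not from the book):

```python
import numpy as np

# The irreducible matrix from the Pollicott-Yuri example.
A = np.array([[0, 1, 1],
              [0, 1, 1],
              [1, 0, 0]])

# A is singular (its first two rows coincide), so A^{-1} does not exist.
print(round(np.linalg.det(A)))  # 0

# The case n = 1 of A^n - A^{n-1}, taking A^0 = I as Reduce does.
I = np.eye(3, dtype=int)
print(A - I)
```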
But it turns out that $A^{n+1}-A^n= A^{n-1}$ for $n\ge2$. For example:
For $n=2$: $A^3=\begin{pmatrix} 1 & 2 & 2 \\ 1 & 2 & 2 \\ 1 & 1 & 1 \end{pmatrix}$, and $A^3-A^2=\begin{pmatrix} 0 & 1 & 1 \\ 0 & 1 & 1 \\ 1 & 0 & 0 \end{pmatrix}= A^1$.
For $n=3$: $A^4=\begin{pmatrix} 2 & 3 & 3 \\ 2 & 3 & 3 \\ 1 & 2 & 2 \end{pmatrix}$, and $A^4-A^3=\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix}= A^2$.
For $n=4$: $A^5=\begin{pmatrix} 3 & 5 & 5 \\ 3 & 5 & 5 \\ 2 & 3 & 3 \end{pmatrix}$, and $A^5-A^4=\begin{pmatrix} 1 & 2 & 2 \\ 1 & 2 & 2 \\ 1 & 1 & 1 \end{pmatrix}= A^3$.
For $n=5$: $A^6=\begin{pmatrix} 5 & 8 & 8 \\ 5 & 8 & 8 \\ 3 & 5 & 5 \end{pmatrix}$, and $A^6-A^5=\begin{pmatrix} 2 & 3 & 3 \\ 2 & 3 & 3 \\ 1 & 2 & 2 \end{pmatrix}= A^4$.
(As can be seen, the Fibonacci numbers are involved too.)
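Both observations are easy to check numerically over a range of exponents; a small NumPy sketch (the helper `power` is my own):

```python
import numpy as np

A = np.array([[0, 1, 1],
              [0, 1, 1],
              [1, 0, 0]])

def power(M, n):
    # n-th power of an integer matrix by repeated multiplication.
    P = np.eye(3, dtype=int)
    for _ in range(n):
        P = P @ M
    return P

# The identity A^{n+1} - A^n = A^{n-1} holds for n >= 2.
for n in range(2, 10):
    assert (power(A, n + 1) - power(A, n) == power(A, n - 1)).all()

# The bottom-left entries of A, A^2, A^3, ... run through the Fibonacci numbers.
print([int(power(A, n)[2, 0]) for n in range(1, 8)])  # [1, 0, 1, 1, 2, 3, 5]
```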
From the above, one is tempted to (recursively) define $A^{n-1}=A^{n+1}-A^n$ for all $n\le1$. For example,
$A^0=A^2-A^1 = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ -1 & 1 & 1 \end{pmatrix} \not= I$. Even though $A^0$, defined this way, is different from $I$, it acts like $I$ when multiplied by $A^n$, $n\ge1$. For example, $A^0\cdot A=A\cdot A^0=A$, $A^0\cdot A^7=A^7\cdot A^0=A^7$, etc. Also, $(A^0)^2=A^0$, and $(A^0)^n=A^0$ for $n\ge1$.
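A quick check that this $A^0$ really does act as an identity (a NumPy sketch; `A0` is my name for it):

```python
import numpy as np

A = np.array([[0, 1, 1],
              [0, 1, 1],
              [1, 0, 0]])

# A^0 as defined by the recurrence: A^0 = A^2 - A^1 (not the identity matrix I).
A0 = A @ A - A

# A0 acts as an identity on positive powers of A ...
A7 = np.linalg.matrix_power(A, 7)
assert (A0 @ A == A).all() and (A @ A0 == A).all()
assert (A0 @ A7 == A7).all() and (A7 @ A0 == A7).all()

# ... and is idempotent: (A^0)^n = A^0 for n >= 1.
assert (A0 @ A0 == A0).all()
```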
Similarly, one may define $A^{-1}=A^1-A^0 = \begin{pmatrix} -1 & 1 & 1 \\ -1 & 1 & 1 \\ 2 & -1 & -1 \end{pmatrix}$. (And, one may define $A^{-n}$ for all $n\ge1$.) Even though $A$ is not invertible, the $A^{-1}$ as defined above behaves like an inverse of $A$, namely $A^{-1}\cdot A=A\cdot A^{-1}=A^0 = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ -1 & 1 & 1 \end{pmatrix}$.
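And the corresponding check for this pseudo-inverse (again a NumPy sketch; `Ainv` is my name for the matrix called $A^{-1}$ above):

```python
import numpy as np

A = np.array([[0, 1, 1],
              [0, 1, 1],
              [1, 0, 0]])
A0 = A @ A - A        # the pseudo-identity A^0 = A^2 - A^1
Ainv = A - A0         # the candidate inverse A^{-1} = A^1 - A^0

# Although A is singular, Ainv inverts A relative to A0 (not relative to I).
assert (Ainv @ A == A0).all() and (A @ Ainv == A0).all()
print(Ainv)
```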
So, my (confused) question (in addition to (1) and (2) at the beginning) is: (3) What am I observing (assuming it has already been observed and is well known)?
Edit. I think I understand better what I was asking (or what had confused me), and will post my comments here. This is essentially an answer, but I will leave the option open for someone else to post their own answer, as they find appropriate, since the question was a bit open-ended and may admit different types of answers. (Also, after typing this edit, I feel there are more details to be verified.)
QiaochuYuan left a comment which I initially did not understand, but now I think he meant something along what is illustrated by the following example. Let $F=\{\begin{pmatrix} x & 0 \\ 0 & 0 \end{pmatrix}: x\in\r\}$. Then $F$ is a field isomorphic to $\r$, with additive identity $O=\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$ and multiplicative identity $U=\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$, and if $X=\begin{pmatrix} x & 0 \\ 0 & 0 \end{pmatrix}\not=O$ then $X^{-1}=\begin{pmatrix} x^{-1} & 0 \\ 0 & 0 \end{pmatrix}$. The latter does not contradict the fact that $\begin{pmatrix} x & 0 \\ 0 & 0 \end{pmatrix}$, as a matrix, is not invertible. If $Id=\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ is the identity matrix, this looks like a contradiction, having two different multiplicative identities, namely $U$ and $Id$, but there is no contradiction as simply $Id$ does not belong to $F$.
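The point of this example is easy to verify directly; a NumPy sketch (the helper `X` is my own):

```python
import numpy as np

def X(x):
    # The general element of F.
    return np.array([[x, 0.0], [0.0, 0.0]])

U = X(1.0)  # the multiplicative identity of F; note U is not Id

# U acts as an identity inside F, and X(x) has the inverse X(1/x) in F ...
assert (U @ X(3.0) == X(3.0)).all() and (X(3.0) @ U == X(3.0)).all()
assert (X(2.0) @ X(0.5) == U).all()

# ... even though every element of F is singular as a 2x2 matrix.
assert abs(np.linalg.det(X(2.0))) < 1e-12
```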
It didn't seem that QiaochuYuan addressed the identity $A^{n+1}-A^n= A^{n-1}$ for $n\ge2$ (and the latter seemed special to me). Let $E= \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ -1 & 1 & 1 \end{pmatrix}$ (which I had denoted by $A^0$ above, but I prefer $E$ now). $E$ is the multiplicative identity of the structure described in my question, and I was puzzled as it looked like there were two different multiplicative identities, namely $E$ and $I= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$. It seemed to me that, with the usual matrix multiplication, there could be only one choice for the multiplicative identity, namely $I$. The matrix $E$ isn't even a diagonal matrix, and it looked strange that it would be a multiplicative identity (and I must have forgotten that matrices have normal forms). Well, I verified the details later, and $E$ is indeed the multiplicative identity, while $I$ simply does not belong to this structure.
Call the structure hinted at in my question $K$. So $K$ contains $A=\begin{pmatrix} 0 & 1 & 1 \\ 0 & 1 & 1 \\ 1 & 0 & 0 \end{pmatrix}$ as well as all $A^n$, $n\ge1$ (usual matrix multiplication). We have that $A^{n-1}=A^{n+1}-A^n$ for all $n\ge2$, and this suggests that $E=A^2-A= \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ -1 & 1 & 1 \end{pmatrix}$ would play the role of the multiplicative identity. Once this is done, we can define the inverse of $A$ as $A-E$ (where $E$ plays the role of $A^0$). I had used the notation $A^{-1}$ in my question, but that notation confused me, as in this context I think it should be reserved for the matrix inverse (which does not exist for $A$). So, if $n\ge1$, I will denote the multiplicative inverse of $A^n$ (in $K$) by $A^{[-n]}$. In particular, $A^{[-1]}=A-E=\begin{pmatrix} -1 & 1 & 1 \\ -1 & 1 & 1 \\ 2 & -1 & -1 \end{pmatrix}$, then $A^{[-2]}=E-A^{[-1]}=\begin{pmatrix} 2 & -1 & -1 \\ 2 & -1 & -1 \\ -3 & 2 & 2 \end{pmatrix}$, next $A^{[-3]}=A^{[-1]}-A^{[-2]}$, etc. (using the identity $A^{n-1}=A^{n+1}-A^n$ as a model).
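The negative "powers" can be generated recursively and checked against $E$; a NumPy sketch (the list `neg`, with `neg[n]` holding $A^{[-n]}$, is my own bookkeeping):

```python
import numpy as np

A = np.array([[0, 1, 1],
              [0, 1, 1],
              [1, 0, 0]])
E = A @ A - A                 # the multiplicative identity of K

# Extend downward via A^{n-1} = A^{n+1} - A^n:
# A^[-1] = A - E, A^[-2] = E - A^[-1], A^[-3] = A^[-1] - A^[-2], ...
neg = [E, A - E]              # neg[n] holds A^[-n] (with neg[0] = E)
for n in range(2, 6):
    neg.append(neg[n - 2] - neg[n - 1])

# Each A^[-n] is the multiplicative inverse of A^n relative to E.
for n in range(1, 6):
    An = np.linalg.matrix_power(A, n)
    assert (neg[n] @ An == E).all() and (An @ neg[n] == E).all()
```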
Let $S=\{A^{[-n]}:n\ge1\}\cup\{E\}\cup\{A^n:n\ge1\}$. Then $S$ is a commutative group under usual matrix multiplication, with identity element $E$. Since of course we can also add and subtract the elements of $S$ (as matrices), I was at first confused into thinking that $S$ is a field, but I knew that couldn't be, as a field of characteristic zero must contain a copy of the rationals $\q$, whereas $S$ is isomorphic as a group to $(\Bbb Z,+)$. (But it was getting late.)
I had simply forgotten that $S$ is not closed under addition (and subtraction). So $S$ is not a field, but it generates a field. Let $K$ be the field that is generated by $S$. I will present a couple of more specific descriptions of $K$ below.
First, every element of $S$ is of the form $\begin{pmatrix} q & r & r \\ q & r & r \\ p & q & q \end{pmatrix}$, with $p+q=r$ (so obviously such an element is completely determined by $p$ and $q$). Every element of $K$ is of this form too, where $p,q,r\in\q$ and $p+q=r$. The operations are usual matrix addition and multiplication, with the usual zero matrix, but with $E$ for the multiplicative identity.
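That this parametrized form is closed under matrix multiplication can be verified symbolically; a SymPy sketch (the helper `M` is my own):

```python
import sympy as sp

p1, q1, p2, q2 = sp.symbols('p1 q1 p2 q2')

def M(p, q):
    # The general element of K, determined by p and q (with r = p + q).
    r = p + q
    return sp.Matrix([[q, r, r], [q, r, r], [p, q, q]])

# Multiply two general elements and read off the new p and q.
P = (M(p1, q1) * M(p2, q2)).expand()
p_new, q_new = P[2, 0], P[0, 0]

# The product has exactly the same shape, so the family is closed.
assert P == M(p_new, q_new).expand()
print(p_new, q_new)
```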
Suppose that there is a field isomorphism $h:K\to L \subset \r$. (There is one, indeed, described below.) Let $a=h(A)$; then $1+a=a^2$, since $E+A=A^2$. The solutions for $a$ are the golden ratio $\varphi=\frac{1+\sqrt{5}}2\approx1.618$ and its conjugate $\psi=\frac{1-\sqrt{5}}2\approx-0.618$. Thus, $K$ is isomorphic to the field extension $\q(\sqrt{5})$ (hmm, I didn't verify whether matrix multiplication goes over to the usual multiplication in $\r$, so I may be wrong, but will keep writing).
Let $(p,q)$ abbreviate the matrix $\begin{pmatrix} q & r & r \\ q & r & r \\ p & q & q \end{pmatrix}$, where $p+q=r$. Then $h(E)=h(-1,1)=1$, $h(A)=h(1,0)=a$, and $h(A^2)=h(0,1)=a^2$. Thus $h(p,q)=pa+qa^2$. Note also that $\pm\sqrt{5}=3a-a^2$. There are two possibilities for $h$. Either (1), $a=\varphi$ and then $h(p,q)=\frac{p+3q}2+\frac{p+q}2\sqrt{5}$, or (2), $a=\psi$ and then $h(p,q)=\frac{p+3q}2-\frac{p+q}2\sqrt{5}$. I feel I didn't verify all details, but it is getting late again. If what I wrote in this edit is incorrect, then the question remains as to explain what the above example is or does.
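A numeric spot-check that $h$ really turns matrix multiplication into multiplication in $\r$ (taking $a=\varphi$; the helpers `M` and `h` are my own):

```python
import numpy as np

phi = (1 + 5 ** 0.5) / 2   # the golden ratio, a = phi

def M(p, q):
    # The element of K with parameters (p, q), where r = p + q.
    r = p + q
    return np.array([[q, r, r], [q, r, r], [p, q, q]], dtype=float)

def h(p, q):
    # h(p, q) = p*a + q*a^2 = (p + 3q)/2 + ((p + q)/2) * sqrt(5).
    return p * phi + q * phi ** 2

# h(E) = h(-1, 1) = 1, the multiplicative identity of R.
assert abs(h(-1, 1) - 1.0) < 1e-9

# Multiplicativity: h of a product equals the product of the h values.
for (p1, q1), (p2, q2) in [((1, 0), (1, 0)), ((-1, 1), (2, -3)), ((5, 2), (0, 1))]:
    P = M(p1, q1) @ M(p2, q2)
    p3, q3 = P[2, 0], P[0, 0]
    assert abs(h(p1, q1) * h(p2, q2) - h(p3, q3)) < 1e-9
```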
There are probably many different directions to take this. Here's what I've come up with. If we put $A^{(n)} = A^n$ for $n \ge 1$, you have observed that we have the recurrence $$A^{(n+2)} = A^{(n+1)} + A^{(n)}\tag{F}.$$ You then used this recurrence to extend the definition of $A^{(n)}$ to all integers $n$. We know that the relation $A^{(n+m)} = A^{(n)}A^{(m)}$ holds for $n,m \ge 1$. We can make an inductive argument that this relation holds when $n$ and $m$ are any integers as follows. First, assume that $m \ge 1$. Then, if $n$ is any integer, and we assume the result holds for all larger values of $n$, we obtain $$A^{(n)}A^{(m)} = (A^{(n+2)} - A^{(n+1)})A^{(m)} = A^{(n+m+2)} - A^{(n+m+1)} = A^{(n+m)}.$$ We then let $n$ be arbitrary and induct in the same manner on $m$ to complete the argument. This result shows in particular that $A^{(0)}$ acts as an identity element under matrix multiplication for the set $\{ A^{(n)} \}$ (as was suggested by DavidP in the comments). Indeed, this set is a cyclic group, and the inverse of $A^{(n)}$ is $A^{(-n)}$.
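The claim that $\{ A^{(n)} \}$ is a cyclic group under matrix multiplication can be checked directly over a range of exponents (a NumPy sketch; the dictionary `powers` is my bookkeeping for $A^{(n)}$):

```python
import numpy as np

A = np.array([[0, 1, 1],
              [0, 1, 1],
              [1, 0, 0]])

# Build A^(n) for -5 <= n <= 6: positive powers by multiplication,
# the rest via the recurrence A^(n) = A^(n+2) - A^(n+1).
powers = {1: A, 2: A @ A}
for n in range(3, 7):
    powers[n] = powers[n - 1] @ A
for n in range(0, -6, -1):
    powers[n] = powers[n + 2] - powers[n + 1]

# A^(n) A^(m) = A^(n+m) for all integers in range: a cyclic group
# with identity A^(0) and inverse A^(-n) for A^(n).
for n in range(-3, 4):
    for m in range(-2, 3):
        assert (powers[n] @ powers[m] == powers[n + m]).all()
```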
There is a more general result lurking behind all this. Instead of the two variable relation $A^{(n+m)} = A^{(n)}A^{(m)}$, consider instead the recurrences $$A^{(n+1)} = A \cdot A^{(n)} = A^{(n)} \cdot A.\tag{P}$$ Together with $A = A^{(1)}$, these imply the two variable relation. We can rephrase our results by saying that we extended the definition of $A^{(n)}$ using relation $(\mathrm{F})$, and then discovered that $(\mathrm{P})$ held as well, with the consequence that $A^{(-1)}$ acts as the inverse of $A^{(1)}$. The more general result is:
Let $R$ be a (not necessarily commutative) ring. Suppose the sequence $x_0, x_1,x_2,\ldots \in R$ satisfies, for $n \ge 0$, the relations $$\sum_{i=0}^r c_i x_{n+i} = 0\tag{1}$$ and $$\sum_{j=0}^s d_j x_{n+j} = 0\tag{2}$$ for some elements $c_0,\dots,c_r,d_0,\dots,d_s$ of $R$. Suppose for $0 \le i \le r$ and $0 \le j \le s$ that $c_i$ and $d_j$ commute. If $c_0$ is invertible, there is a unique extension of $\{x_n\}$ to all integers that satisfies $(1)$, and this extension also satisfies $(2)$.
Proof:
The relation $(1)$ is equivalent to $x_n = -c_0^{-1}\sum_{i=1}^r c_i x_{n+i}$, which implies the existence and uniqueness of the extension. We know that $(2)$ holds for $n \ge 0$. Now let $n$ be an integer and suppose inductively that $(2)$ holds for all larger values of $n$. Then, $$\begin{align*} d_0 x_n & = -d_0 c_0^{-1}\sum_{i=1}^r c_i x_{n+i} = c_0^{-1}\sum_{i=1}^r c_i (-d_0 x_{n+i}) = c_0^{-1}\sum_{i=1}^r c_i \sum_{j=1}^s d_j x_{n+i+j} \\ & = c_0^{-1}\sum_{j=1}^s d_j \sum_{i=1}^r c_i x_{n+i+j} = c_0^{-1}\sum_{j=1}^s d_j (-c_0x_{n+j}) = -\sum_{j=1}^s d_j x_{n+j}. \end{align*}$$ Thus, $(2)$ is verified.
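To tie the lemma back to the motivating example: take $R$ to be the ring of $3\times3$ integer matrices, with $(1)$ the Fibonacci-type relation (so $c_0=I$ is invertible) and $(2)$ the relation $(\mathrm{P})$. A NumPy sketch of this instantiation (the dictionary `x` is my bookkeeping):

```python
import numpy as np

A = np.array([[0, 1, 1],
              [0, 1, 1],
              [1, 0, 0]])

# x_n = A^n (n >= 1) satisfies both
#   (1)  x_n + x_{n+1} - x_{n+2} = 0   (c_0 = I, c_1 = I, c_2 = -I; c_0 invertible)
#   (2)  A x_n - x_{n+1} = 0           (d_0 = A, d_1 = -I)
# and every c_i commutes with every d_j.
x = {1: A, 2: A @ A, 3: A @ A @ A}

# Extend downward using (1): x_n = x_{n+2} - x_{n+1}.
for n in range(0, -5, -1):
    x[n] = x[n + 2] - x[n + 1]

# As the lemma predicts, (2) persists for the extended sequence.
for n in range(-4, 3):
    assert (A @ x[n] == x[n + 1]).all()
```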