Solving a Derivation of a $2\times2$ Matrix


I am currently doing some research and am working on the following problem:

Let $DT_2(R)$ be the ring of upper triangular $2\times 2$ matrices over $R$ whose two diagonal entries are equal. Determine all derivations $d: DT_2(R)\to DT_2(R)$.

In order for a map $d$ to be a derivation, it must satisfy the following three conditions:

$(1)\,\,\,\, d(A+B)=d(A)+d(B)$

$(2)\,\,\,d(AB)=Ad(B)+d(A)B$

$(3)\,\,\,\, d(\alpha A)=\alpha d(A)$ where $\alpha$ is a scalar

Let's suppose $d$ maps $\begin{bmatrix} a&b\\0&a\end{bmatrix} \mapsto \begin{bmatrix} f(a,b)& g(a,b)\\0& f(a,b)\end{bmatrix}$, where $f,g: R\times R \to R$.

So, I'm skipping (1) and (3), since those are more routine. The main focus here is (2), where most of the work will come from.

Taking (2) $d(AB)=Ad(B)+d(A)B$ and starting from the left side

$\begin{bmatrix} a&b\\0&a\end{bmatrix}\begin{bmatrix} x&y\\0&x\end{bmatrix}=\begin{bmatrix} ax&ay+bx\\0&ax\end{bmatrix}\to\begin{bmatrix} f(ax,ay+bx)&g(ax,ay+bx)\\0&f(ax,ay+bx)\end{bmatrix}$; this is $d(AB)$.

Just to clarify, the step above came from multiplying the matrices and then applying the map entrywise: the diagonal entries go to $f$ evaluated at the pair $(ax, ay+bx)$, and the corner entry goes to $g$ evaluated at the same pair.
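As a quick sanity check (my addition, not part of the original question), the product $AB$ can be verified with sympy:

```python
import sympy as sp

a, b, x, y = sp.symbols('a b x y')
A = sp.Matrix([[a, b], [0, a]])
B = sp.Matrix([[x, y], [0, x]])

AB = A * B
print(AB)  # the corner entry is a*y + b*x, as above
```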

Now, the next step is doing the same from the right hand side.

$$d(AB)=Ad(B)+d(A)B$$

$$\begin{bmatrix}f(a,b)&g(a,b)\\0&f(a,b)\end{bmatrix} \begin{bmatrix}x&y\\0&x\end{bmatrix} + \begin{bmatrix}a&b\\0&a\end{bmatrix} \begin{bmatrix}f(x,y)&g(x,y)\\0&f(x,y)\end{bmatrix}=\begin{bmatrix}f(a,b)x+af(x,y)&f(a,b)y+g(a,b)x+ag(x,y)+bf(x,y)\\0&f(a,b)x+af(x,y)\end{bmatrix}$$

And then, since $d(AB)=Ad(B)+d(A)B$, corresponding entries of the two matrices must be equal. The two diagonal entries agree and one entry is $0$, so this leaves two functional equations: $f(a,b)x+af(x,y)=f(ax,ay+bx)$ and $f(a,b)y+g(a,b)x+ag(x,y)+bf(x,y)=g(ax,ay+bx)$.
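To double-check that these two functional equations are really what property (2) forces (a verification I added; the symbols follow the setup above), $f$ and $g$ can be left as undetermined functions in sympy:

```python
import sympy as sp

a, b, x, y = sp.symbols('a b x y')
f, g = sp.Function('f'), sp.Function('g')

A = sp.Matrix([[a, b], [0, a]])
B = sp.Matrix([[x, y], [0, x]])

def d(M):
    # the ansatz: diagonal entries -> f(top-left, top-right), corner -> g(...)
    p, q = M[0, 0], M[0, 1]
    return sp.Matrix([[f(p, q), g(p, q)], [0, f(p, q)]])

lhs = d(A * B)
rhs = A * d(B) + d(A) * B

print(sp.Eq(lhs[0, 0], rhs[0, 0]))  # the diagonal condition
print(sp.Eq(lhs[0, 1], rhs[0, 1]))  # the corner condition
```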

When I began to think about how to solve these equations, I went through elimination, substitution, and all sorts of other methods, but each one gave ugly results. Then I thought that writing the arguments as linear combinations of the pairs $(1,0)$ and $(0,1)$ might work.

Taking an example, something along the lines of: $(a,b)=(a,0)+(0,b)$, so by (1) and (3), $f(a,b)=f\big((a,0)+(0,b)\big)=f\big(a(1,0)\big)+f\big(b(0,1)\big)=af(1,0)+bf(0,1)=aa_1 +ba_2$, where $f(1,0)=a_1$ and $f(0,1)=a_2$.

Continuing on...

Let's call $f(a,b)x+af(x,y)=f(ax,ay+bx)$ Equation A and $f(a,b)y+g(a,b)x+ag(x,y)+bf(x,y)=g(ax,ay+bx)$ Equation B. Starting with Equation A, using the process directly above, I've broken up each term:

$f(a,b)x=f\big(a(1,0)+b(0,1)\big)x=\big(af(1,0)+bf(0,1)\big)x=(aa_1 + ba_2)x$, where $f(1,0)=a_1$ and $f(0,1)=a_2$

$af(x,y)=af\big(x(1,0)+y(0,1)\big)=a\big(xf(1,0)+yf(0,1)\big)=a(xa_1 +ya_2)$

$f(ax,ay+bx)=f\big(ax(1,0)+(ay+bx)(0,1)\big)=axf(1,0)+(ay+bx)f(0,1)=axa_1+(ay+bx)a_2$

Therefore, Equation A becomes $(aa_1+ba_2)x+(xa_1+ya_2)a=axa_1+(ay+bx)a_2$.
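Substituting the linear form $f(a,b)=aa_1+ba_2$ into Equation A, almost everything cancels; this small sympy check (my addition, assuming a commutative $R$ such as $\mathbb{R}$) shows what survives:

```python
import sympy as sp

a, b, x, y, a1, a2 = sp.symbols('a b x y a1 a2')

def f(u, v):
    return u * a1 + v * a2  # the linear form derived above

# Equation A residual: f(a,b)*x + a*f(x,y) - f(a*x, a*y + b*x)
residual = sp.expand(f(a, b) * x + a * f(x, y) - f(a * x, a * y + b * x))
print(residual)  # only one term survives, a multiple of a1
```

Since the residual $a a_1 x$ must vanish for all $a$ and $x$, setting $a=x=1$ gives $a_1=f(1,0)=0$.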

Following the same process for Equation B,

$f(a,b)y+g(a,b)x+ag(x,y)+bf(x,y)=g(ax,ay+bx)$

$f(a,b)y=f\big(a(1,0)+b(0,1)\big)y=\big(af(1,0)+bf(0,1)\big)y=(aa_1+ ba_2)y$

$g(a,b)x=x\big(ag(1,0)+bg(0,1)\big)=x(ab_1+bb_2)$, where $g(1,0)=b_1$ and $g(0,1)=b_2$

$ag(x,y)=a\big(xg(1,0)+yg(0,1)\big)=a(xb_1+yb_2)$

$bf(x,y)=b\big(xf(1,0)+yf(0,1)\big)=b(xa_1+ya_2)$

$g(ax,ay+bx)=axg(1,0)+(ay+bx)g(0,1)=axb_1 +(ay+bx)b_2$

Therefore, Equation B becomes $(aa_1+ba_2)y+(ab_1+bb_2)x+(xb_1+yb_2)a+(xa_1+ya_2)b=axb_1+(ay+bx)b_2$.

Which now gives me the following two equations to solve:

$$(aa_1+ba_2)x+(xa_1+ya_2)a=axa_1+(ay+bx)a_2 \quad\text{(Equation A)}$$

$$(aa_1+ba_2)y+(ab_1+bb_2)x+(xb_1+yb_2)a+(xa_1+ya_2)b=axb_1+(ay+bx)b_2 \quad\text{(Equation B)}$$
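For what it's worth, the two equations can be attacked mechanically: treat them as polynomial identities in $a,b,x,y$ and set every coefficient to zero. This is a check I added; it assumes the linear forms above and that $2$ is invertible in $R$ (e.g. $R=\mathbb{R}$):

```python
import sympy as sp

a, b, x, y = sp.symbols('a b x y')
a1, a2, b1, b2 = sp.symbols('a1 a2 b1 b2')

f = lambda u, v: u * a1 + v * a2
g = lambda u, v: u * b1 + v * b2

# residuals of Equation A and Equation B (left side minus right side)
eqA = sp.expand(f(a, b) * x + a * f(x, y) - f(a * x, a * y + b * x))
eqB = sp.expand(f(a, b) * y + g(a, b) * x + a * g(x, y) + b * f(x, y)
                - g(a * x, a * y + b * x))

# both must vanish identically in a, b, x, y, so every coefficient is zero
coeffs = sp.Poly(eqA, a, b, x, y).coeffs() + sp.Poly(eqB, a, b, x, y).coeffs()
sol = sp.solve(coeffs, [a1, a2, b1, b2], dict=True)
print(sol)  # a1 = a2 = b1 = 0, with b2 left as a free constant
```

That would mean $f\equiv 0$ and $g(a,b)=b_2 b$, which is consistent with the hints in the answer below ($d(Id)=0$ and $d(N)=cN$).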

Now, in Equation B, I wasn't sure if I needed to designate different variables for $g(1,0)$ and $g(0,1)$. Had I been able to keep the previous notation $a_1$ and $a_2$, I would now be able to cancel out a few terms using elimination. But again, this is where I get confused and have questions. Please help.

There is 1 answer below.
Hints:

Notice that every element of $DT_2(R)$ can be written as $aId+bN$, where $N=\begin{pmatrix}0 &1\\0&0\end{pmatrix}$. Thus, all elements of $DT_2(R)$ commute.

Notice also that $N^2=0$. Use property $(2)$: $0=d(N^2)=Nd(N)+d(N)N$, so $Nd(N)=-d(N)N$. Since all elements commute, $Nd(N)=d(N)N=-d(N)N$, hence $2d(N)N=0$ and $d(N)N=0$. Prove that $d(N)=cN$, for some $c$.

Use property $(2)$ and the fact that $Id^2=Id$ to compute $d(Id)$.

Now, you can describe your $d$.
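A final sanity check (my addition): the map the hints lead to, $d(aId+bN)=cbN$, does satisfy the Leibniz rule:

```python
import sympy as sp

a, b, x, y, c = sp.symbols('a b x y c')
Id = sp.eye(2)
N = sp.Matrix([[0, 1], [0, 0]])

def d(M):
    # the candidate derivation suggested by the hints: d(a*Id + b*N) = c*b*N
    return c * M[0, 1] * N

A = a * Id + b * N
B = x * Id + y * N

lhs = d(A * B)                 # d(AB)
rhs = A * d(B) + d(A) * B      # A d(B) + d(A) B
assert (lhs - rhs).expand() == sp.zeros(2, 2)
print("d(AB) == A d(B) + d(A) B")
```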