This is problem 2.16.2 in Etingof's introduction to representation theory. Note that problem 2.16.1 is a proof of Lie's theorem. I'm having trouble with the second case, where the base field has positive characteristic. Any hints are appreciated!
Problem: Classify the irreducible finite dimensional representations of the two-dimensional Lie algebra $\mathfrak g$ with basis $x, y$ and commutation relation $[x, y] = y$. Consider two cases: (i) the base field has characteristic $0$, and (ii) the base field has positive characteristic. Is Lie's theorem true in positive characteristic?
Partial Solution: (i) Since $\mathfrak g$ is solvable, Lie's theorem applies, and every finite dimensional irrep is one-dimensional: $y$ must act by $0$ (a one-dimensional representation sends $y = [x,y]$ to a commutator of scalars, which vanishes), while $x$ acts by an arbitrary scalar.
(ii) (Notation): Let $V$ be a finite dimensional irrep of $\mathfrak g$. Elements $x, y\in \mathfrak g$ are lower-case; their representatives in End($V$) are $X$ and $Y$, respectively. Let $k$ be the base field, algebraically closed of characteristic $p$ (algebraic closure is used below via Schur's lemma).
Let $v$ be any nonzero vector in $V$. I first thought that $v$ generates the subrepresentation $\mathfrak g v = \{\alpha X v + \beta Y v: \alpha, \beta\in k \}$, so that $\dim(V)\le 2$ by irreducibility. But $\mathfrak g v$ need not be invariant: $X(Yv) = XYv$ need not lie in the span of $Xv$ and $Yv$. The subrepresentation generated by $v$ is the span of all monomials in $X$ and $Y$ applied to $v$, so no dimension bound follows this way. What does hold: if $\dim(V) = 1$, then $X$ and $Y$ are scalars, so they commute, hence $Y = XY - YX = 0$, and $X$ acts by an arbitrary scalar; such a $V$ is isomorphic to $k$.
If dim$(V) = 2$, then $\ldots$ I don't know. I see that the adjoint representation is reducible, since the span of $y$ is invariant under $[y,-] = \begin{bmatrix} 0 & 0 \\ -1 & 0 \end{bmatrix}$ (nilpotent) and $[x, -] = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$ (idempotent), but I'm not sure this is relevant.
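As a sanity check, these two adjoint matrices can be verified numerically (a quick sketch; the basis order $(x, y)$ is assumed):

```python
import numpy as np

# Adjoint representation of g in the ordered basis (x, y), over Z:
# ad(y): x -> [y, x] = -y, y -> 0;  ad(x): x -> 0, y -> [x, y] = y.
ad_x = np.array([[0, 0],
                 [0, 1]])
ad_y = np.array([[0, 0],
                 [-1, 0]])

# The defining relation [x, y] = y holds in the adjoint representation:
assert np.array_equal(ad_x @ ad_y - ad_y @ ad_x, ad_y)

# ad(y) is nilpotent, ad(x) is idempotent:
assert np.array_equal(ad_y @ ad_y, np.zeros((2, 2), dtype=int))
assert np.array_equal(ad_x @ ad_x, ad_x)

# The span of y (second basis vector) is invariant under both matrices,
# so the adjoint representation is reducible:
e_y = np.array([0, 1])
assert np.array_equal(ad_x @ e_y, e_y)      # ad(x) y = y
assert np.array_equal(ad_y @ e_y, 0 * e_y)  # ad(y) y = 0
```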
Thanks!
OK, I think, following the example in home.math.au.dk/jantzen/mntrl.pdf, that I have a solution in the case of positive characteristic. Sorry this is so wordy!
(ii) Such a representation consists of a finite dimensional vector space $V$ and a pair of matrices $X$, $Y$ in End($V$) such that $XY - YX = Y$. Consider the subspace $YV$, which is invariant under $Y$. For $v$ in $V$, $XYv = (Y+YX)v = Y(v + Xv)$ lies in $YV$. Thus $YV$ is invariant under $X$ as well, so $YV$ is a subrepresentation of $V$.
Some computations: \begin{align*} YX^\alpha &= (X - I)^\alpha Y\\ XY^\alpha &= Y^\alpha X + \alpha Y^\alpha\\ XY^p &= Y^pX\\ (X^p - X)Y &= Y(X^p - X) \end{align*} (The first two follow by induction from $YX = (X-I)Y$ and $XY = YX + Y$; the third is the second with $\alpha = p \equiv 0$; the last uses $(X-I)^p = X^p - I$ in characteristic $p$.) So $Y^p$ and $X^p - X$ commute with both $X$ and $Y$, hence with the whole associative subalgebra of End($V$) that they generate, i.e. the image of the universal enveloping algebra $U(\mathfrak g)$. Since $V$ is irreducible over this algebra and $k$ is algebraically closed, Schur's lemma says each acts as a scalar on $V$: $Y^pv = \lambda_Yv$ and $(X^p - X)v = \lambda_Xv$, that is, $X^pv = Xv + \lambda_Xv$.
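These identities can be checked numerically over $\mathbb{F}_p$. The pair below is a hypothetical concrete example satisfying $XY - YX = Y$ (a diagonal $X$ and a cyclic shift $Y$; it is not taken from the problem, and it happens to have $\lambda_Y = 1$ and $\lambda_X = 0$):

```python
import numpy as np

p = 5  # any prime

# Assumed concrete pair over F_p with XY - YX = Y (not from the post):
# X = diag(0, 1, ..., p-1), Y the cyclic shift e_i -> e_{i+1 mod p}.
X = np.diag(np.arange(p))
Y = np.roll(np.eye(p, dtype=int), 1, axis=0)
I = np.eye(p, dtype=int)

def mpow(M, n):
    """n-th power of an integer matrix, reduced mod p at each step."""
    R = I.copy()
    for _ in range(n):
        R = (R @ M) % p
    return R

# Defining relation:
assert np.array_equal((X @ Y - Y @ X) % p, Y % p)

# X Y^a = Y^a X + a Y^a  (mod p):
for a in range(1, p + 1):
    Ya = mpow(Y, a)
    assert np.array_equal((X @ Ya) % p, (Ya @ X + a * Ya) % p)

# Y X^a = (X - I)^a Y  (mod p):
for a in range(1, p + 1):
    assert np.array_equal((Y @ mpow(X, a)) % p, (mpow(X - I, a) @ Y) % p)

# Y^p and X^p - X are central, here even scalar:
assert np.array_equal(mpow(Y, p), I)               # lambda_Y = 1
assert np.array_equal((mpow(X, p) - X) % p, 0 * I) # lambda_X = 0 (Fermat)
```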
Then for all $v$ in $V$, $$ (Y - \lambda_Y^{1/p}I)^pv = (Y^p - \lambda_Y)v = 0, $$ since $(Y - \mu I)^p = Y^p - \mu^p I$ in characteristic $p$.
Consider the case that $\lambda_Y = 0$, so $Y^p = 0$ on $V$ and $Y$ is nilpotent. We will show that in this case $Y = 0$ and $V$ is one dimensional. Let $V_0 = \ker Y$, which is nonzero because $Y$ is nilpotent. For $v$ in $V_0$ we compute $YXv = (XY - Y)v = 0$, which shows that $XV_0\subset V_0$; trivially $YV_0 = 0\subset V_0$. So $V_0$ is a nonzero subrepresentation, and $V_0 = V$ by irreducibility; that is, $Y = 0$. Then $V$ is an irreducible representation of the commutative algebra generated by $X$ alone, so $X$ is scalar by Schur's lemma (for algebraically closed fields), and irreducibility forces $V$ to be one dimensional.
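The key invariance computation $YXv = (XY - Y)v = 0$ for $v \in \ker Y$ can be illustrated on a small example. The pair below is the adjoint pair from the question, which satisfies $XY - YX = Y$ with $Y$ nilpotent; it is reducible, so it only illustrates the computation, not the irreducibility conclusion:

```python
import numpy as np

# Adjoint pair: X = ad(x), Y = ad(y) satisfies XY - YX = Y, Y nilpotent.
X = np.array([[0, 0],
              [0, 1]])
Y = np.array([[0, 0],
              [-1, 0]])
assert np.array_equal(X @ Y - Y @ X, Y)
assert np.array_equal(Y @ Y, np.zeros((2, 2), dtype=int))  # Y^2 = 0

# V_0 = ker Y is spanned by (0, 1)^T; check that X maps it back into ker Y,
# i.e. Y X v = (XY - Y)v = 0 for v in V_0:
v0 = np.array([0, 1])
assert np.array_equal(Y @ v0, np.zeros(2, dtype=int))        # v0 in ker Y
assert np.array_equal(Y @ (X @ v0), np.zeros(2, dtype=int))  # X v0 in ker Y
```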
Now consider the case that $\lambda_Y$ is nonzero. Since $(Y - \lambda_Y^{1/p}I)^p = 0$ on $V$, the operator $Y - \lambda_Y^{1/p}I$ is nilpotent, so its kernel is nonzero: pick an eigenvector $e_0$ of $Y$ with eigenvalue $\lambda_Y^{1/p}$. Define $e_i = X^ie_0$. Recall $(X^p - X)e_0 = \lambda_Xe_0$, that is, $e_p - e_1 = \lambda_Xe_0$. This shows that $\{e_0, e_1, \ldots, e_p\}$ is linearly dependent. Also $$ Ye_i = YX^ie_0 = (X - I)^iYe_0 = \lambda_Y^{1/p}(X - I)^ie_0.\ \ (*) $$ This is a polynomial of degree $i$ in $X$ applied to $e_0$, so it lies in the span of $\{ e_0, \ldots, e_{i}\}$. Together with $e_p = e_1 + \lambda_Xe_0$, this shows that the $k$-span of $\{ e_0, \ldots, e_{p-1}\}$ is stable under $X$ and $Y$. Since this span is nonzero, it must be all of $V$.
We will show that this generating set for $V$ is in fact a basis, so that $V$ has dimension $p$. Suppose not. Then there is a least $k\le p-1$ such that $e_k$ lies in the span of $\{ e_0, \ldots, e_{k-1} \}$; by minimality, $\{ e_0, \ldots, e_{k-1} \}$ is linearly independent. Write $$ e_k = \alpha_0e_0 + \cdots + \alpha_je_j, \qquad j < k,\ \alpha_j\not=0. $$ Apply $Y$ and use $(*)$: \begin{align*} Ye_k &= \alpha_0Ye_0 + \cdots + \alpha_jYe_j\\ \lambda_Y^{1/p}(X - I)^ke_0 &= \sum_{s=0}^{j}\alpha_s\lambda_Y^{1/p}(X - I)^se_0\\ (X - I)^ke_0 &= \sum_{s=0}^{j}\alpha_s(X - I)^se_0 \qquad (\text{dividing by }\lambda_Y^{1/p}\not=0). \end{align*} Expanding $(X - I)^me_0 = \sum_{i=0}^m\binom{m}{i}(-1)^{m-i}e_i$ on both sides and substituting $e_k = \sum_{s=0}^j\alpha_se_s$ on the left gives $$ \sum_{s=0}^{j}\alpha_se_s + \sum_{i=0}^{k-1}\binom{k}{i}(-1)^{k-i}e_i = \sum_{s=0}^{j}\alpha_se_s + \sum_{s=0}^{j}\alpha_s\sum_{i=0}^{s-1}\binom{s}{i}(-1)^{s-i}e_i, $$ hence $$ \sum_{i=0}^{k-1}\binom{k}{i}(-1)^{k-i}e_i = \sum_{s=0}^{j}\alpha_s\sum_{i=0}^{s-1}\binom{s}{i}(-1)^{s-i}e_i. $$ Both sides are combinations of the linearly independent vectors $e_0, \ldots, e_{k-1}$. On the left the coefficient of $e_j$ is $\binom{k}{j}(-1)^{k-j}$, which is nonzero in the base field because $0\le j\le k\le p-1$ forces $\binom{k}{j}\not\equiv 0 \pmod p$. On the right only $e_0, \ldots, e_{j-1}$ appear, so the coefficient of $e_j$ is zero, a contradiction. Conclude that there is no such linear dependence, so $\{ e_0, \ldots, e_{p-1}\}$ is a basis for $V$.
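The eigenvector construction and the independence of $e_0, \ldots, e_{p-1}$ can be checked numerically on a concrete pair. The matrices below are a hypothetical example over $\mathbb{F}_p$ satisfying $XY - YX = Y$ with $\lambda_Y = 1$ (so $\mu = \lambda_Y^{1/p} = 1$); they are not taken from the text above:

```python
import numpy as np

p = 5  # any prime

# Assumed pair over F_p with XY - YX = Y:
# X = diag(0, ..., p-1), Y the cyclic shift e_i -> e_{i+1 mod p}.
# Here Y^p = I, so lambda_Y = 1 and mu = lambda_Y^(1/p) = 1.
X = np.diag(np.arange(p))
Y = np.roll(np.eye(p, dtype=int), 1, axis=0)
mu = 1
assert np.array_equal((X @ Y - Y @ X) % p, Y)

# e_0 = (1, ..., 1)^T is an eigenvector of Y with eigenvalue mu:
e0 = np.ones(p, dtype=int)
assert np.array_equal((Y @ e0) % p, (mu * e0) % p)

# Build e_i = X^i e_0 and check Y e_i = mu (X - I)^i e_0  (mod p):
cols = []
v, w = e0, e0          # v = X^i e_0,  w = (X - I)^i e_0
for i in range(p):
    cols.append(v % p)
    assert np.array_equal((Y @ v) % p, (mu * w) % p)
    v, w = (X @ v) % p, ((X - np.eye(p, dtype=int)) @ w) % p

# The e_i are linearly independent: the matrix with columns e_0..e_{p-1}
# is Vandermonde-like and invertible mod p.
E = np.stack(cols, axis=1)
assert round(np.linalg.det(E.astype(float))) % p != 0
```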
So, when $\lambda_Y\not=0$, a finite dimensional irreducible representation of $\mathfrak g$ must have dimension $p$. In terms of the above basis we have the following matrices. $X$ has $1$s on the first subdiagonal ($Xe_i = e_{i+1}$ for $i < p-1$), and its last column is $\lambda_Xe_0 + e_1$, since $Xe_{p-1} = X^pe_0 = \lambda_Xe_0 + Xe_0$. $Y$ is upper triangular, with $k^{\mathrm{th}}$ column given by $$ Ye_k = YX^ke_0 = (X - I)^kYe_0 = \lambda_Y^{1/p}\sum_{i=0}^k\binom{k}{i}(-1)^{k-i}e_i. $$
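This construction can be sanity-checked for any prime $p$, assuming for simplicity that the scalars $\lambda_X, \lambda_Y$ lie in the prime field $\mathbb{F}_p$, where $\lambda_Y^{1/p} = \lambda_Y$ because Frobenius is the identity there. The helper name `build_rep` is my own; the columns of $Y$ include the eigenvalue factor $\lambda_Y^{1/p}$:

```python
import numpy as np
from math import comb

def build_rep(p, lam_x, lam_y):
    """Matrices X, Y (mod p) in the basis e_0, ..., e_{p-1}.

    Scalars are taken in the prime field F_p, where the p-th root of
    lam_y is lam_y itself (Frobenius is the identity on F_p).
    """
    mu = lam_y % p                  # mu^p = lam_y in F_p
    X = np.zeros((p, p), dtype=int)
    for i in range(p - 1):          # X e_i = e_{i+1}
        X[i + 1, i] = 1
    X[0, p - 1] = lam_x % p         # X e_{p-1} = lam_x e_0 + e_1
    X[1, p - 1] = (X[1, p - 1] + 1) % p
    Y = np.zeros((p, p), dtype=int)
    for k in range(p):              # Y e_k = mu sum_i C(k,i)(-1)^(k-i) e_i
        for i in range(k + 1):
            Y[i, k] = (mu * comb(k, i) * (-1) ** (k - i)) % p
    return X, Y

def mpow(M, n, p):
    R = np.eye(len(M), dtype=int)
    for _ in range(n):
        R = (R @ M) % p
    return R

p, lam_x, lam_y = 5, 2, 3           # sample parameters, lam_y nonzero
X, Y = build_rep(p, lam_x, lam_y)
I = np.eye(p, dtype=int)

assert np.array_equal((X @ Y - Y @ X) % p, Y)                    # [X, Y] = Y
assert np.array_equal(mpow(Y, p, p), (lam_y * I) % p)            # Y^p = lam_y I
assert np.array_equal((mpow(X, p, p) - X) % p, (lam_x * I) % p)  # X^p - X = lam_x I
```

(That $X^p - X = \lambda_X I$ holds is no accident: $X$ is the companion matrix of $t^p - t - \lambda_X$.)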
For example, when $p = 2$: $Xe_0 = e_1$ and $Xe_1 = X^2e_0 = \lambda_Xe_0 + e_1$, while $Ye_0 = \lambda_Y^{1/2}e_0$ and $Ye_1 = \lambda_Y^{1/2}(X - I)e_0 = \lambda_Y^{1/2}(e_1 - e_0)$, so $$ X = \begin{bmatrix} 0 & \lambda_X \\ 1 & 1 \end{bmatrix}, \qquad Y = \lambda_Y^{1/2}\begin{bmatrix} 1 & -1\\ 0 & 1 \end{bmatrix} = \lambda_Y^{1/2}\begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix} \quad (\text{char } 2). $$ Conversely, any nonzero $a$ and any $b$ in $k$ give rise to such an irreducible representation via $\lambda_Y = a^p$ and $\lambda_X = b$.
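The characteristic-$2$ relations can be checked directly. Here I take $Xe_0 = e_1$, $Xe_1 = \lambda_Xe_0 + e_1$ and $Ye_k = \mu(X-I)^ke_0$ with $\mu = \lambda_Y^{1/2} = \lambda_Y$ in $\mathbb{F}_2$ (Frobenius is the identity on the prime field), i.e. the matrices built from the basis construction:

```python
import numpy as np

p = 2
I = np.eye(p, dtype=int)
for lam_x in range(p):
    for lam_y in range(1, p):       # lam_y nonzero
        mu = lam_y                  # lam_y^(1/2) = lam_y in F_2
        X = np.array([[0, lam_x],
                      [1, 1]])
        Y = mu * np.array([[1, -1],  # -1 is 1 mod 2
                           [0,  1]])
        assert np.array_equal((X @ Y - Y @ X) % p, Y % p)        # [X, Y] = Y
        assert np.array_equal((Y @ Y) % p, (lam_y * I) % p)      # Y^2 = lam_y I
        assert np.array_equal((X @ X - X) % p, (lam_x * I) % p)  # X^2 - X = lam_x I
```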
For which pairs $(a, b)$, $(a^\prime, b^\prime)$ are the corresponding representations isomorphic? Suppose $\phi$ is such an isomorphism, intertwining $(X, Y)$ with $(X^\prime, Y^\prime)$. Then $$ a\phi(e_0) = \phi(Ye_0) = Y^\prime\phi(e_0), $$ so $\phi(e_0)$ is an eigenvector of $Y^\prime$ with eigenvalue $a$; since $a^\prime$ is the only eigenvalue of $Y^\prime$, in fact $a^\prime = a$. From the identity $X(X^{p-1}e_0 - e_0) = be_0$ we also must have \begin{align*} b\phi(e_0) = X^\prime\phi(X^{p-1}e_0 - e_0) &= X^\prime\left( \phi(X^{p-1}e_0) - \phi(e_0) \right)\\ &= X^\prime\left( X^\prime\phi(X^{p-2}e_0) - \phi(e_0) \right)\\ &\ \vdots\\ &= X^\prime\big( (X^\prime)^{p-1}\phi(e_0) - \phi(e_0) \big)\\ &= \big((X^\prime)^p - X^\prime\big)\phi(e_0) = b^\prime\phi(e_0), \end{align*} which shows $b^\prime = b$. Thus distinct pairs $(a, b)$ give non-isomorphic representations.
These dimension-$p$ irreducible representations show that Lie's theorem does not hold in positive characteristic.