Let $\mathfrak{h} \subseteq \mathfrak{sl}_{3}(\mathbb{C})$ be the CSA consisting of the diagonal matrices and R the corresponding roots. Then R is a root system in $\mathfrak{h}^{\ast}$. I always see people referring to the picture root system $\mathfrak{sl}_{3}(\mathbb{C})$ as that root system. I don't understand why this makes sense, as $\mathfrak{h}^{\ast} \cong \mathbb{C}^{2} \ncong \mathbb{R}^{2}$.
Picture of Root System of $\mathfrak{sl}_{3}(\mathbb{C})$
1k Views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
Your confusion is understandable. It is true that the roots are originally defined as elements of $\mathfrak h^*$, which is a $\mathbb C$-vector space (two-dimensional, hence abstractly isomorphic to $\mathbb C^2$). However, note that there are only finitely many roots; and further, if you choose two linearly independent ones among them, all the other roots are actually $\mathbb Z$-linear combinations of those two. In other words, all the roots live in a $\mathbb Z$-lattice inside that big complex vector space. In a way, we do not need complex scalars to describe the relations between the roots, just integer coefficients. (And this is "almost true" for all root systems: at worst you have to use very simple fractions like $1/2$ or $1/3$ beyond integers.)
In more intricate parts of the theory, this "root lattice", which here abstractly would just be $\mathbb Z^2$, and related concepts, play an important role.
Now why, instead of talking about the $\mathbb Z$- or $\mathbb Q$-span of the roots, do we go "almost all the way" back up to $\mathbb C$, but stop at putting that $\mathbb Z$-lattice into an $\mathbb R$-vector space? I think because this is just the most intuitive way to visualise it: we have a good feeling for the geometry of Euclidean space, and you'll notice that the next step is to look at certain scalar products, visualise reflections and rotations, etc. This is all best visualised as happening in lattices which sit inside a Euclidean space. Compare also the question "root system of semi-simple Lie algebra and passing into euclidean space", where it was asked why we do not just look at the $\mathbb Q$-vector space spanned by the roots. (Here and here are other recent questions where I came up with the answer by imagining Euclidean space, as the idea of "hyperplanes" kind of demands.)
Added in reply to your comment: The next point is that on the root system one can define a kind-of-standard scalar product, and with this we can talk about lengths of roots and angles between them. So if we want to use our intuition for Euclidean space, we should make that scalar product match the standard Euclidean one.
In the case at hand, we can choose two roots $\alpha, \beta$ such that the full root system consists of $\alpha, \beta, \gamma:=\alpha+\beta$, and their negatives. The scalar product is made so that $(\rho, \rho)=2$ for all roots $\rho$, whereas $(\alpha, \beta)=-1$, and from this one can compute $(\alpha, \gamma)=1$ and the products for all other combinations of roots.
So to "realise" (pun intended) those roots in the standard $\mathbb R^2$ with the standard Euclidean scalar product $( \, , \, )_{Euclid}$, all roots should have length $\sqrt 2$. One realisation of this root system in $\mathbb R^2$ would be $\alpha \mapsto (\sqrt2,0)$, $\beta \mapsto (-\frac12 \sqrt 2, \frac12 \sqrt 6)$, and accordingly $\gamma \mapsto (\frac12 \sqrt 2, \frac12 \sqrt 6)$, etc. -- basically a regular hexagon stretched to radius $\sqrt 2$. If one does not care about the scaling, it's easier to map $\alpha \mapsto (1,0)$, $\beta \mapsto (-\frac12 , \frac12\sqrt 3)$, hence $\gamma \mapsto (\frac12, \frac12\sqrt 3)$, etc. Either version is what you see in your linked picture, where the length of the roots is up to your imagination. Of course you can also rotate this picture by the craziest irrational angles you can come up with, as long as the roots' relative positions to each other stay rigid (accordingly, the picture does not show a coordinate system "under" the roots).
Funnily, there is an easier realisation if, instead of using $\mathbb R^2$ itself, we embed the root system into a "skew" plane inside $\mathbb R^3$, with (the restriction of) the standard Euclidean scalar product there. Namely, send $\alpha \mapsto (1,-1,0)$, $\beta \mapsto (0, 1,-1)$, and accordingly $\gamma \mapsto (1,0,-1)$, etc. Note that the scalar products match exactly, and we have nice integer coefficients! The only downside is that technically the $2$-dimensional vector space spanned by the roots is not $\mathbb R^2$ itself, but rather $V:= \lbrace (v_1,v_2,v_3) \in \mathbb R^3: \sum v_i=0 \rbrace$. Still, one often finds this identification the easiest. It also generalises nicely to higher $n$.
However, to map $\alpha$ to $(1,0)$ and $\beta$ to $(0,1)$ is not a good idea, because for this one would have to use a strange nonstandard scalar product on $\mathbb R^2$. The fact that in the root scalar product $(\alpha, \beta) =-1$ really means that the angle between $\alpha$ and $\beta$ is $2\pi/3$ a.k.a. $120°$, and to work with that, we should identify $\alpha, \beta$ with vectors which "really" have this angle in Euclidean space.