This entry is a direct continuation of this post and is part of a larger series on representation theory that starts with that post. In this blog post, we continue our study of semisimple rings and bring it to a conclusion by proving a classification theorem. Then we apply this insight into the structure of semisimple algebras to the group algebra to get more information on irreducible representations.
Endomorphism Rings of Semisimple Modules
The goal behind the following lemmas and corollaries is to compute the endomorphism ring of a semisimple module that is a finite direct sum of simple modules, the motivation being that this applies to finite-dimensional representations in the semisimple case (cf. 3.20) and to a semisimple ring as a module over itself (cf. 3.24).
The idea behind the proof is simply to “multiply out” the hom set that gives us endomorphisms by pulling out (finite) direct sums from both arguments and then use Schur’s lemma to understand the hom sets between simple modules.
Definition 4.1 For a ring $R$ and a natural number $n$, $M_n(R)$ denotes the ring of $n \times n$ matrices with entries in $R$, with the usual formulas for addition and multiplication.
The following lemma is a generalization of a well-known fact from linear algebra:
Lemma 4.2 Let $M$ be a right module over a ring $R$, then for every $n \in \mathbb{N}$ we have an isomorphism of rings $$\mathrm{End}_R(M^n) \cong M_n(\mathrm{End}_R(M)).$$
Proof Let's index the direct summands in $M^n = M_1 \oplus \dots \oplus M_n$ (so $M_i = M$ for all $i$, but we're keeping track of which component it is). Note that as abelian groups, the isomorphism is clear, since we have $$\mathrm{End}_R(M^n) = \mathrm{Hom}_R\Big(\bigoplus_{i=1}^n M_i, \bigoplus_{j=1}^n M_j\Big) \cong \bigoplus_{i,j} \mathrm{Hom}_R(M_i, M_j).$$ Now each summand satisfies $\mathrm{Hom}_R(M_i, M_j) \cong \mathrm{End}_R(M)$, at least as an abelian group, which we can use to identify elements in $\mathrm{End}_R(M^n)$ with elements in $M_n(\mathrm{End}_R(M))$ by sending the component living in $\mathrm{Hom}_R(M_i, M_j)$ to the entry at the position $(j,i)$. The verification that this respects multiplication amounts to the same calculation that shows that composition of linear maps corresponds to matrix multiplication.
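For instance, for $n = 2$: writing $\iota_i$ for the inclusion of the $i$-th summand and $\pi_j$ for the projection onto the $j$-th summand, an endomorphism $f$ of $M \oplus M$ corresponds to the matrix $$\begin{pmatrix} \pi_1 \circ f \circ \iota_1 & \pi_1 \circ f \circ \iota_2 \\ \pi_2 \circ f \circ \iota_1 & \pi_2 \circ f \circ \iota_2 \end{pmatrix} \in M_2(\mathrm{End}_R(M)),$$ and composing two endomorphisms corresponds to multiplying the associated matrices.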
This gives a description of the endomorphism rings of finite direct sums of simple modules, where each simple submodule is in the same isomorphism class. To properly “multiply out” our endomorphism rings, we just need another application of Schur’s lemma:
Lemma 4.3 Let $M$ and $N$ be modules over a ring $R$ and suppose that $\mathrm{Hom}_R(M,N) = 0 = \mathrm{Hom}_R(N,M)$, then we have as rings $$\mathrm{End}_R(M \oplus N) \cong \mathrm{End}_R(M) \times \mathrm{End}_R(N).$$ If $R$ is a $k$-algebra for some field $k$, then this is an isomorphism of $k$-algebras.
Proof We always have an injection $\mathrm{End}_R(M) \times \mathrm{End}_R(N) \to \mathrm{End}_R(M \oplus N)$ given by $(f,g) \mapsto f \oplus g$. Here $f \oplus g$ means that we apply $f$ on the first, and $g$ on the second summand. Since addition and composition of such endomorphisms can be carried out component-wise, this is a ring homomorphism, and if everything is a $k$-vector space, it is also $k$-linear. The image of that map is the set of endomorphisms of $M \oplus N$ that map $M$ and $N$ to themselves. Given our conditions, if we have any endomorphism of $M \oplus N$, restricting it to $M$ and composing with the projection onto $N$ gives a homomorphism $M \to N$, which is zero by assumption. This shows that the endomorphism maps $M$ to itself, and by the same argument also $N$, thus the map is surjective.
Corollary 4.4 Let $M$ be a right module over a ring $R$ that is a finite direct sum of simple submodules, say $M \cong S_1^{n_1} \oplus \dots \oplus S_r^{n_r}$, such that the $S_i$ are pairwise non-isomorphic. Let $D_i := \mathrm{End}_R(S_i)$ be the endomorphism rings, then $$\mathrm{End}_R(M) \cong M_{n_1}(D_1) \times \dots \times M_{n_r}(D_r).$$
Proof By Schur’s lemma, there are no nonzero homomorphisms between semisimple modules that don’t have a simple submodule in common, so that we can apply 4.3 $(r-1)$ times to get that $\mathrm{End}_R(M) \cong \mathrm{End}_R(S_1^{n_1}) \times \dots \times \mathrm{End}_R(S_r^{n_r})$. Now apply 4.2 to each factor.
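For example, if $M \cong S_1 \oplus S_1 \oplus S_2$ with $S_1 \not\cong S_2$ simple, this gives $\mathrm{End}_R(M) \cong M_2(D_1) \times D_2$.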
Using this, we get a partial converse to Schur’s lemma:
Corollary 4.5 Let $M$ be a module that is a finite sum of simple submodules, then $M$ is simple if and only if the endomorphism ring of $M$ is a division ring.
Proof One direction is just Schur’s lemma. The other direction follows from 4.4 by noting that the only way that $M_{n_1}(D_1) \times \dots \times M_{n_r}(D_r)$ is a division ring is if $r = 1$ and $n_1 = 1$.
Enter the Matrix Ring
Corollary 4.4 already gives us a description of endomorphism rings of modules which are finite sums of simple submodules. In the description, matrix rings over division algebras appear. This means that in order to deepen our understanding of those endomorphism rings, we should study matrix rings.
We begin by studying their ideals:
Lemma 4.6 Let $R$ be a ring and $n \in \mathbb{N}$, then the map from two-sided ideals of $R$ to two-sided ideals of $M_n(R)$ sending $I$ to $M_n(I)$ is a bijection. (Here $M_n(I)$ is the subset of all elements in $M_n(R)$ where each entry is contained in $I$.)
Proof It’s clear that $M_n(I)$ is a two-sided ideal in $M_n(R)$ when $I$ is a two-sided ideal in $R$. It’s also clear that the map $I \mapsto M_n(I)$ is injective (we can recover $I$ from $M_n(I)$ just by looking at the set of elements of $R$ that appear in the entries of the matrices in $M_n(I)$).
For surjectivity, let $E_{ij}$ be the matrix that is zero everywhere except at the place $(i,j)$, where it is $1$. Then if $J \subseteq M_n(R)$ is a two-sided ideal and $A \in J$, we compute that $E_{ii} A E_{jj}$ is the matrix that agrees with $A$ at the place $(i,j)$ and is zero everywhere else. A calculation shows that the set $I$ of elements in $R$ that appear as the entry at the place $(1,1)$ of some matrix in $J$ is a two-sided ideal in $R$, but then $M_n(I) \subseteq J$, because we can permute the entries of a matrix that has only one non-zero entry by multiplying from the left and right by permutation matrices (and then by taking sums, we get that every element of $M_n(I)$ is contained in $J$).
On the other hand $J \subseteq M_n(I)$, because we can first multiply a matrix from the left and right by permutation matrices and then multiply by $E_{11}$ from the left and right to see that we could have also defined $I$ as the set of elements which appear as any matrix entry of an element of $J$. Thus $J = M_n(I)$.
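As a concrete illustration: the ideals of $\mathbb{Z}$ are exactly the subsets $m\mathbb{Z}$, so the two-sided ideals of $M_n(\mathbb{Z})$ are exactly the sets $M_n(m\mathbb{Z})$ of matrices all of whose entries are divisible by $m$.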
This lemma implies in particular that if $D$ is a division ring, then $M_n(D)$ has no proper non-zero two-sided ideals. We can use this together with another property that holds in general for finite-dimensional modules:
Lemma 4.7 Let $M$ be a non-zero module over a ring $R$ and suppose that $R$ contains a division ring $D$ such that $M$ is a finite-dimensional $D$-vector space. (We don’t require that $R$ is a $D$-algebra, i.e. that $D$ is commutative and contained in the center of $R$.) Then $M$ contains a simple submodule.
Proof Start with any non-zero submodule of $M$, e.g. $M$ itself. If it is not simple, choose a proper non-zero submodule of it, then if that submodule is not simple choose a proper non-zero submodule of that, etc.
Since the dimension over $D$ has to decrease at every step, this process has to terminate, which gives us a simple submodule.
Lemma 4.8 Let $R$ be a ring that has no proper nonzero two-sided ideals and that has a minimal right ideal $I$, then $R$ is semisimple and every simple module over $R$ is isomorphic to $I$.
Proof Let $R$ be such a ring and let $I$ be a minimal right ideal, then we can make a two-sided ideal out of $I$ by taking the product $RI = \sum_{r \in R} rI$.
Since $R$ doesn’t contain proper nonzero two-sided ideals and $RI \neq 0$, we get that $RI = R$. For each fixed $r \in R$, $rI$ is the image of the simple module $I$ under the right $R$-linear map $I \to R,\ x \mapsto rx$, so it is either isomorphic to $I$ (and hence simple) or zero by Schur’s lemma.
The implication (2.) implies (3.) in 3.14 (compare the proof or the remark after the proof) gives us that there is a subset $T \subseteq R$ such that $R = \bigoplus_{r \in T} rI$. Thus $R$ is semisimple. 3.25 implies that every simple module is isomorphic to a direct summand in the sum $\bigoplus_{r \in T} rI$, but every summand of that sum is isomorphic to $I$, which implies that every simple module is isomorphic to $I$.
A natural question is how to recover $R$ from the matrix ring $M_n(R)$. The following lemma provides this, if we also have the module $R^n$:
Lemma 4.9 Let $R$ be a ring and consider $R^n$ as the space of row vectors, which is a right $M_n(R)$-module, then we get that $\mathrm{End}_{M_n(R)}(R^n) \cong R$, where $r \in R$ corresponds to the endomorphism that multiplies every entry by $r$ from the left. In particular, if such an endomorphism is given by right multiplication with a matrix, then that matrix is a scalar multiple of the identity.
Proof Let $E_{ij}$ be the matrix that has zeroes everywhere except a $1$ at the place $(i,j)$, let $e_i$ be the vector in $R^n$ that has zeroes everywhere except a one in the $i$-th component. For any $\sigma \in S_n$ let $P_\sigma$ be the permutation matrix associated to $\sigma$, such that applying that matrix corresponds to permuting the entries with $\sigma$.
Then for any $f \in \mathrm{End}_{M_n(R)}(R^n)$, we get that $f(e_1) = f(e_1 E_{11}) = f(e_1) E_{11}$. Now multiplying with $E_{11}$ corresponds to making all entries zero except the first entry, which is left untouched. Thus $f(e_1)$ is a multiple of $e_1$, so we can find a unique $r \in R$ such that $f(e_1) = r e_1$.
For every $i$ let $\tau_i$ be the transposition that switches $1$ and $i$, then we have $f(e_i) = f(e_1 P_{\tau_i}) = f(e_1) P_{\tau_i} = r e_1 P_{\tau_i} = r e_i$.
Since the $e_i$ form a basis of $R^n$ as a right $R$-module (and $f$ is right $R$-linear, because the action of $r \in R$ is the action of the matrix $rI_n$), we have seen that $f$ is just scalar multiplication (from the left) with $r$, and sending $f$ to $r$ gives the desired ring isomorphism. If $f$ is given by right multiplication with a matrix, then comparing values on the $e_i$ shows that this matrix is $r I_n$, where $I_n$ denotes the identity matrix.
This proves the lemma.
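For $n = 1$ this recovers the familiar fact that $\mathrm{End}_R(R) \cong R$ for $R$ regarded as a right module over itself: an $R$-linear endomorphism $f$ is determined by $f(1)$, since $f(x) = f(1 \cdot x) = f(1)x$, so it is left multiplication by $f(1)$.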
Putting the last few lemmas together, we obtain the main result on modules over matrix rings over a division algebra:
Lemma 4.10 Let $D$ be a division algebra and $n \in \mathbb{N}$, then $M_n(D)$ is semisimple, the unique simple right module over $M_n(D)$ up to isomorphism is $D^n$, and $\mathrm{End}_{M_n(D)}(D^n) \cong D$. (Here we’re regarding $D^n$ as row vectors so that the right $M_n(D)$-action makes sense.)
Proof 4.6 implies that $M_n(D)$ has no nonzero proper two-sided ideals and 4.7 implies that it has a simple submodule (as a right module over itself), so that 4.8 applies and $M_n(D)$ is semisimple with only one isomorphism class of simple modules.
The statement that $D^n$ is a simple $M_n(D)$-module is just linear algebra:
Given any non-zero vector $v$ in $D^n$ and any $w \in D^n$, there’s a linear transformation (i.e. a matrix) that sends $v$ to $w$. Thus $D^n$ is a simple module over $M_n(D)$, as every non-zero submodule is the whole module. The part about the endomorphism ring follows from 4.9.
Now we come to the main theorem about semisimple rings that classifies them and also relates their ring-theoretic properties to their modules.
Theorem 4.11 (Artin-Wedderburn) A ring $R$ is semisimple iff there exist natural numbers $n_1, \dots, n_r$ and division rings $D_1, \dots, D_r$ such that $$R \cong M_{n_1}(D_1) \times \dots \times M_{n_r}(D_r).$$
Here $r$ is the number of simple $R$-modules $S_1, \dots, S_r$ up to isomorphism, $D_i$ is the endomorphism ring of $S_i$ and $n_i = \dim_{D_i}(S_i)$. In particular, $r$ is uniquely determined and the $n_i$ and $D_i$ are unique up to isomorphism and permutation of the factors.
Proof Note that for every ring $R$, considered as a right module over itself, we have an isomorphism of rings $\mathrm{End}_R(R) \cong R$ by applying 4.9 with $n = 1$. Let $R = S_1^{n_1} \oplus \dots \oplus S_r^{n_r}$ be a decomposition of $R$ into simple right submodules such that the $S_i$ are pairwise non-isomorphic. Let $D_i := \mathrm{End}_R(S_i)$.
By 4.4, we have an isomorphism $R \cong \mathrm{End}_R(R) \cong M_{n_1}(D_1) \times \dots \times M_{n_r}(D_r)$. Here $r$ is uniquely determined as the number of simple modules over $R$ up to isomorphism. The $D_i$ together with the $n_i$ are uniquely determined as the endomorphism rings of the simple modules and the dimensions of the simple modules over their endomorphism rings (cf. lemma 4.10 for modules over matrix rings over a division algebra and 2.5 for modules over a product).
For the reverse direction, note that matrix rings over division rings are semisimple by 4.10 and 3.15 implies that finite products of semisimple rings are semisimple.
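For example, one can check that $$\mathbb{C}[S_3] \cong \mathbb{C} \times \mathbb{C} \times M_2(\mathbb{C}),$$ where the factors correspond to the trivial representation, the sign representation and a two-dimensional simple module of the symmetric group $S_3$.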
After having finally proved the main theorem for semisimple rings, we can give lots of applications:
Corollary 4.12 Semisimplicity for rings is left-right symmetric, i.e. if $R$ is semisimple, then so is the opposite ring $R^{\mathrm{op}}$.
Proof Using 4.11, it’s enough to show that $M_n(D)^{\mathrm{op}} \cong M_n(D^{\mathrm{op}})$ for a division ring $D$, since the opposite ring of a division ring is a division ring as well.
An isomorphism $M_n(D)^{\mathrm{op}} \to M_n(D^{\mathrm{op}})$ is given by transposition $A \mapsto A^T$.
Corollary 4.13 Let $R$ be a semisimple ring, then the following are equivalent:
- $R$ is commutative
- For all simple $R$-modules $S$, the endomorphism ring $\mathrm{End}_R(S)$ is a field and $\dim_{\mathrm{End}_R(S)}(S) = 1$.
Proof We apply 4.11: $R \cong M_{n_1}(D_1) \times \dots \times M_{n_r}(D_r)$ is commutative iff all $n_i$ are $1$ and all $D_i$ are commutative, i.e. fields.
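To illustrate: $\mathbb{C}[\mathbb{Z}/n] \cong \mathbb{C}[x]/(x^n - 1) \cong \mathbb{C} \times \dots \times \mathbb{C}$ ($n$ factors) by the Chinese remainder theorem, since $x^n - 1$ has $n$ distinct roots in $\mathbb{C}$; this matches the corollary, as the algebra is commutative and every simple module is one-dimensional over its endomorphism ring $\mathbb{C}$.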
Corollary 4.14 Let $G$ be a finite abelian group, then every irreducible real representation of $G$ is at most 2-dimensional.
Proof Let $G$ be a finite abelian group. Since $\mathbb{C}$ is the algebraic closure of $\mathbb{R}$ and has dimension 2 over $\mathbb{R}$, we get that all finite field extensions of $\mathbb{R}$ have dimension at most 2. By 4.13 all simple $\mathbb{R}[G]$-modules are one-dimensional over their endomorphism rings, which are finite field extensions of $\mathbb{R}$ by 3.27 and 4.13, so the endomorphism rings are at most two-dimensional, thus all simple $\mathbb{R}[G]$-modules have dimension at most 2 over $\mathbb{R}$.
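The bound is attained: for example, $\mathbb{Z}/3$ acts irreducibly on $\mathbb{R}^2$ by letting a generator act through the rotation matrix $$\begin{pmatrix} \cos(2\pi/3) & -\sin(2\pi/3) \\ \sin(2\pi/3) & \cos(2\pi/3) \end{pmatrix},$$ which has no real eigenvector, so there is no invariant line; the endomorphism ring of this representation is isomorphic to $\mathbb{C}$.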
Corollary 4.15 Let $G$ be a finite group and $k$ be an algebraically closed field such that the characteristic of $k$ doesn’t divide the order of $G$, then the following are equivalent:
- $G$ is abelian
- All irreducible representations of $G$ are one-dimensional
- The number of irreducible representations of $G$ up to isomorphism is equal to the order of $G$
Proof The equivalence of (1) and (2) follows from 4.13 together with 3.27 and 3.29: the latter two imply that the endomorphism ring of any simple $k[G]$-module is $k$, so the statement follows from 4.13.
The equivalence of (2) and (3) follows from 3.30.
Reminder For a group $G$, the abelianization $G^{\mathrm{ab}}$ is the largest commutative quotient of $G$. Explicitly, it is given by the quotient of $G$ by the subgroup generated by all commutators. It has the universal property that every morphism from $G$ to an abelian group factors uniquely through the map $G \to G^{\mathrm{ab}}$.
Corollary 4.16 Let $G$ be a finite group and $k$ be an algebraically closed field such that the characteristic of $k$ doesn’t divide the order of $G$, then the number of one-dimensional representations of $G$ up to isomorphism is equal to the order of the abelianization $G^{\mathrm{ab}}$.
Proof One-dimensional representations are homomorphisms $G \to k^{\times} = \mathrm{GL}_1(k)$.
Since $k^{\times}$ is commutative, these correspond to homomorphisms $G^{\mathrm{ab}} \to k^{\times}$, i.e. one-dimensional representations of $G^{\mathrm{ab}}$. One-dimensional representations are automatically irreducible and 4.15 tells us that since $G^{\mathrm{ab}}$ is abelian (and the characteristic of $k$ doesn’t divide $|G^{\mathrm{ab}}|$, as $|G^{\mathrm{ab}}|$ divides $|G|$), there are exactly $|G^{\mathrm{ab}}|$ of them up to isomorphism.
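For example, the abelianization of $S_3$ is $\mathbb{Z}/2$ (the commutator subgroup is $A_3$), so $S_3$ has exactly two one-dimensional complex representations: the trivial representation and the sign representation.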
The Center of the Group Algebra
The last corollary gives a partial answer to the question of the number of irreducible representations of a group, by characterizing the number of one-dimensional representations in terms of the group theory of $G$. In this section, we give a group-theoretic characterization of the number of all irreducible representations (given suitable assumptions on the field).
Definition 4.17 If $R$ is a ring, then the center $Z(R)$ is defined as $Z(R) := \{x \in R \mid xy = yx \text{ for all } y \in R\}$. It is a commutative subring of $R$.
Lemma 4.18 If $R$ is a ring and $n \in \mathbb{N}$, then $Z(M_n(R))$ consists of those diagonal matrices where all diagonal entries are the same value that lies in $Z(R)$. So we have an isomorphism of rings $Z(M_n(R)) \cong Z(R)$.
Proof We regard $R^n$ as row vectors, which is naturally a right $M_n(R)$-module. Then for $z \in Z(M_n(R))$, the map $R^n \to R^n,\ v \mapsto vz$ is $M_n(R)$-linear, because for $A \in M_n(R)$, we have $(vA)z = v(Az) = v(zA) = (vz)A$.
By 4.9, $z$ is a diagonal matrix where all diagonal entries are the same. Clearly the diagonal entry must lie in $Z(R)$, as $z$ commutes with all other such diagonal matrices.
Lemma 4.19 If $R$ and $S$ are rings, then $Z(R \times S) = Z(R) \times Z(S)$.
Proof This is an easy computation. Multiplication in $R \times S$ is defined component-wise.
Corollary 4.20 If $R$ is a semisimple ring with the Artin-Wedderburn decomposition $R \cong M_{n_1}(D_1) \times \dots \times M_{n_r}(D_r)$, then $Z(R) \cong Z(D_1) \times \dots \times Z(D_r)$.
Corollary 4.21 If $A$ is a finite-dimensional semisimple algebra over a field $k$, then $\dim_k Z(A) \geq r$, where $r$ is the number of simple $A$-modules up to isomorphism. We have equality if $k$ is algebraically closed.
Proof By 4.11, the number of simple $A$-modules up to isomorphism is $r$ if $A \cong M_{n_1}(D_1) \times \dots \times M_{n_r}(D_r)$. By 4.20, we have $Z(A) \cong Z(D_1) \times \dots \times Z(D_r)$. Each factor has dimension at least $1$ over $k$, which shows the inequality.
If $k$ is algebraically closed, then $D_i = k$ for all $i$ by 3.29, which shows that equality holds.
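For an example where the inequality is strict, take $k = \mathbb{R}$ and $A = \mathbb{R}[\mathbb{Z}/3] \cong \mathbb{R}[x]/(x^3 - 1) \cong \mathbb{R} \times \mathbb{C}$: there are two simple modules up to isomorphism, but the center (the whole algebra, as it is commutative) has dimension $3$.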
Corollary 4.22 If $A$ is a finite-dimensional semisimple real algebra and $r$ is the number of simple $A$-modules up to isomorphism, then $r \leq \dim_{\mathbb{R}} Z(A) \leq 2r$.
Proof Argue as in the proof of 4.21 and use the fact that $Z(D)$, where $D$ is a finite-dimensional division algebra over $\mathbb{R}$, has to be isomorphic to $\mathbb{R}$ or $\mathbb{C}$, since it is a finite field extension of $\mathbb{R}$, so the dimension of each factor is at most two.
These corollaries relate the number of simple modules of a semisimple algebra to its center. Thus the next thing to do is to study the center of group algebras.
Reminder If $G$ is a group, then for $g \in G$, the conjugacy class of $g$ consists of all elements in $G$ that are conjugate to $g$. In other words, if we consider the action of $G$ on itself by conjugation, $h \cdot g := hgh^{-1}$, then the conjugacy class is the orbit of $g$ under this action. The different conjugacy classes are the minimal subsets of $G$ that are closed under conjugation and they form a partition of $G$.
Lemma 4.23 Let $k$ be a field and let $G$ be a group (we don’t need it to be finite). Then $Z(k[G])$ has a $k$-basis given by the set of elements of the form $\sum_{g \in c} g$, where $c$ varies over all conjugacy classes of $G$ with finitely many elements.
Proof Since $k[G]$ is a $k$-algebra with a $k$-basis given by $G$, we can check whether an element is in the center just by checking whether it commutes with every element of $G$. Let $x = \sum_{g \in G} \lambda_g g$ be an element of $k[G]$ (so all but finitely many $\lambda_g$ are zero). Then $x$ is in the center of $k[G]$ if and only if for all $h \in G$, we have $h x h^{-1} = x$.
Writing out $x$, this equation becomes $\sum_{g \in G} \lambda_{h^{-1} g h}\, g = \sum_{g \in G} \lambda_g\, g$. Here we used that $g \mapsto hgh^{-1}$ is a bijection to reindex the left-hand side.
Comparing coefficients, this is equivalent to $\lambda_{h^{-1} g h} = \lambda_g$ for all $g \in G$.
If this holds for all $h \in G$, then we get that the (finite) set of elements $g \in G$ such that $\lambda_g \neq 0$ is closed under conjugation and if two elements inside it are conjugate, their coefficients are equal. Take a system of representatives $g_1, \dots, g_m$ for this set, where $G$ acts on it by conjugation. Then by the above considerations, we get that $x = \sum_{i=1}^{m} \lambda_{g_i} \sum_{g \in c_i} g$, where $c_i$ is the conjugacy class of $g_i$. This shows that the set of elements of the form $\sum_{g \in c} g$, where $c$ is a finite conjugacy class, is a generating system for $Z(k[G])$. It is linearly independent because distinct conjugacy classes are disjoint.
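For example, for $G = S_3$ the conjugacy classes are $\{e\}$, the three transpositions and the two $3$-cycles, so $Z(k[S_3])$ has the $k$-basis $$e, \qquad (1\,2) + (1\,3) + (2\,3), \qquad (1\,2\,3) + (1\,3\,2).$$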
Theorem 4.24 Let $G$ be a finite group and $k$ a field such that the characteristic of $k$ doesn’t divide the order of $G$. Let $s$ be the number of irreducible representations of $G$ over $k$, up to isomorphism, and let $c$ be the number of conjugacy classes of $G$. Then $s \leq c$, with equality if $k$ is algebraically closed. If $k = \mathbb{R}$, then we have $s \leq c \leq 2s$.
Proof By 4.23, $c$ is the dimension of the center of $k[G]$ over $k$. Now apply 4.21 and 4.22.
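For example, $S_3$ has three conjugacy classes, so there are exactly three irreducible complex representations of $S_3$ up to isomorphism: the trivial one, the sign and a two-dimensional one, matching the decomposition $\mathbb{C}[S_3] \cong \mathbb{C} \times \mathbb{C} \times M_2(\mathbb{C})$ from above.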
Thus we have obtained a relation between the number of irreducible representations and the element structure of $G$ by heavily employing the structure theory for semisimple rings. This is a nice illustration of the power of ring-theoretic tools in representation theory.