Representation Theory

Introduction

Soon after getting into this topic, we discovered that we would be dealing only with the Representation Theory of Groups.

The question “why do we like representations” might be answered by the following quote from Kevin Hartnett: “A representation provides a simplified picture of a group, just as a grayscale photo can serve as a low-cost imitation of the original color image. Put another way, it “remembers” some basic but essential information about the group while forgetting the rest.”

A second concern: there’s a good chance that even after reading through the technical explanation and the informal explanation, you’ll still have the feeling “I don’t get it”. The goal of those two explanations is to get you as close to the idea as we can; seeing what is done in the example should take you the rest of the way.

The Technical Explanation:

A Representation of a Group G is a Vector Space V along with a group homomorphism

\rho : G \to GL(V)

meaning that for all g_1, g_2 \in G,

\rho(g_1 \: g_2) = \rho(g_1) \: \rho(g_2)

Alternate notation for the above: (V, \rho) is a representation of G.
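To make the definition concrete, here is a minimal sketch (a toy example of our own, not from the text above): the two-element group \{0, 1\} under addition mod 2, represented by 2x2 matrices, with the homomorphism property checked by brute force.

```python
def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# rho sends the group {0, 1} (addition mod 2) into GL(2):
# rho(0) is the identity; rho(1) is a reflection (x, y) -> (-x, y).
rho = {
    0: [[1, 0], [0, 1]],
    1: [[-1, 0], [0, 1]],
}

# The homomorphism property: rho(g1 + g2) == rho(g1) rho(g2)
for g1 in (0, 1):
    for g2 in (0, 1):
        assert rho[(g1 + g2) % 2] == matmul(rho[g1], rho[g2])
print("rho is a homomorphism")
```

The reflection matrix squares to the identity, exactly as 1 + 1 = 0 mod 2, which is all the homomorphism property demands here.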

The Verbose Informal Explanation:

We might ask: could representation theory be crudely described as the study of how to find a correspondence between a first group and a second group? Not quite: the second group has to be comprised of things found in a Vector Space, things that exist in the realm of Linear Algebra.

We begin with a statement we found that intends to explain Representation Theory:

“Representation Theory takes abstract algebra structures and represents their elements with linear transformations of vector spaces.”

Exhale. It’s OK if none of that made sense. We will eventually build an example so that those words begin to mean something to you. The linear transformations of vector spaces can be done using matrices. Here’s why we say that:

A matrix can do a rotation, and a rotation can move a vector in a vector space. Now, change your thinking a bit. Instead of saying that the vector is moving, say instead that the coordinate system is moving. If you keep “before” and “after” pictures, someone can come along and you can tell them that you have pictures of the same vector in two different coordinate systems, (x,y,z) and (x’,y’,z’).

“How did you get from the first vector space, (x,y,z), to the second vector space, (x’,y’,z’)?”

“Oh, I used a linear transformation of vector spaces. I used a rotation.”
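The rotation in this dialogue can be sketched in a few lines of code (a toy illustration; the choice of a 90-degree angle is our own):

```python
import math

def rotate(v, degrees):
    """Apply a 2D rotation matrix to the vector v = (x, y)."""
    t = math.radians(degrees)
    x, y = v
    # Standard rotation matrix [[cos, -sin], [sin, cos]] times (x, y),
    # rounded to suppress floating-point dust like 6.1e-17.
    return (round(math.cos(t) * x - math.sin(t) * y, 10),
            round(math.sin(t) * x + math.cos(t) * y, 10))

# Rotating (1, 0) by 90 degrees moves it to (0, 1). Equivalently, the
# vector stayed put and the coordinate system rotated by -90 degrees.
print(rotate((1, 0), 90))   # -> (0.0, 1.0)
```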

It now remains to see what they meant by “abstract algebra structure”.

We find a sentence, “abstract algebra is the study of algebraic structures”. That tells us that we now just need to understand what an algebraic structure is.

Let S be a set that contains examples of algebraic structures, along with the stipulation that we may be missing one or more:

S = {Magma, Quasigroup, Monoid, Semigroup, Group}

Now, what makes an algebraic structure an algebraic structure?

We start with a set, call it S (reusing the letter; this is not the set of examples above), that contains elements, and those elements can be used in one or more operations. The operations must have finite arity, and they are usually binary.

For example, the elements -1 and +1 work with multiplication.

We must confirm that S and the operations pass a finite number of tests, called axioms, in order for someone to certify that S is an algebraic structure.

We will focus on Groups because they are used in the work we will be doing with Point Group Symmetry. To be a Group, S must have a single binary operation and pass four tests: closure (the product of any two elements is again in S), associativity, the existence of an identity element, and the existence of an inverse for every element.
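The four tests (closure, associativity, an identity element, and inverses) can be checked mechanically. A minimal sketch, using the earlier example of the elements +1 and -1 under multiplication:

```python
from itertools import product

# Candidate group: {+1, -1} under ordinary multiplication.
S = {1, -1}
op = lambda a, b: a * b

# Test 1: closure -- every product lands back in S.
closure = all(op(a, b) in S for a, b in product(S, S))
# Test 2: associativity, checked by brute force over all triples.
associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a, b, c in product(S, S, S))
# Test 3: an identity element exists (here it is +1).
identity = next(e for e in S if all(op(e, a) == a == op(a, e) for a in S))
# Test 4: every element has an inverse.
inverses = all(any(op(a, b) == identity for b in S) for a in S)

print(closure, associative, identity, inverses)  # -> True True 1 True
```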

Building an Example:

We are going to have two different groups. Each group will have a binary operation. We will also have a mapping that takes us from an element of the first group to an element of the second group.

For our work, the first group will contain elements that correspond to matrices and the second group will be a group of those matrices.

We are going to work with molecules such as Formaldehyde and Acetone, which have C_{2v} symmetry. For this, we will build the first group using the following Symmetry Elements:

  • E – Identity
  • C_2 – Rotation of 180 degrees about the z axis
  • \sigma_v(xz) – Reflection across the xz plane
  • \sigma_v(yz) – Reflection across the yz plane

A Cayley Table is shown below. It provides all the possible products from the elements of the group, and it is similar to a multiplication table. As an example, assume we want to take the product of \sigma_v(xz) and C_2. In the table, we go down to \sigma_v(xz) and then we go across to C_2 and we find that the product is \sigma_v(yz).

\begin{bmatrix} & E & C_2 & \sigma_v(xz) & \sigma_v(yz) \\ E & E & C_2 & \sigma_v(xz) & \sigma_v(yz) \\ C_2 & C_2 & E & \sigma_v(yz) & \sigma_v(xz) \\ \sigma_v(xz) & \sigma_v(xz) & \sigma_v(yz) & E & C_2  \\ \sigma_v(yz) & \sigma_v(yz) & \sigma_v(xz) & C_2 & E  \end{bmatrix}
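The Cayley Table can be encoded and sanity-checked in code (the short labels s_xz and s_yz are our own abbreviations for \sigma_v(xz) and \sigma_v(yz)):

```python
# The C2v Cayley Table from above, as a nested dictionary:
# table[row][column] gives the product, row element first.
ops = ["E", "C2", "s_xz", "s_yz"]
table = {
    "E":    {"E": "E",    "C2": "C2",   "s_xz": "s_xz", "s_yz": "s_yz"},
    "C2":   {"E": "C2",   "C2": "E",    "s_xz": "s_yz", "s_yz": "s_xz"},
    "s_xz": {"E": "s_xz", "C2": "s_yz", "s_xz": "E",    "s_yz": "C2"},
    "s_yz": {"E": "s_yz", "C2": "s_xz", "s_xz": "C2",   "s_yz": "E"},
}

# The worked example: sigma_v(xz) times C_2 gives sigma_v(yz).
assert table["s_xz"]["C2"] == "s_yz"

# Quick sanity check: each row and column contains every element
# exactly once (a Cayley table is always a Latin square).
for g in ops:
    assert sorted(table[g][h] for h in ops) == sorted(ops)
    assert sorted(table[h][g] for h in ops) == sorted(ops)
print("table checks out")
```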

Now for the matrices. Assume we have a mapping, “Convert” that goes from a symmetry operation to a matrix:

  • Convert(E) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
  • Convert(C_2) = \begin{bmatrix} \cos 180^{\circ} & -\sin 180^{\circ}  & 0 \\ \sin 180^{\circ} & \cos 180^{\circ} & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
  • Convert(\sigma_v(xz)) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
  • Convert(\sigma_v(yz)) = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

Cayley Table of Matrices

\begin{bmatrix} & E = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & C_2 = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \sigma_v(xz) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \sigma_v(yz) = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}  \\ E = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \\ C_2 = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \\ \sigma_v(xz) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \\ \sigma_v(yz) = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{bmatrix} & \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \end{bmatrix}

The above work shows that the symmetry elements and their corresponding matrices represent each other: the matrices multiply exactly as the symmetry operations compose. We have enough information here to verify the homomorphism.
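A sketch of that verification (the dictionary keys s_xz and s_yz are our own abbreviations; the matrices are the Convert matrices above): for every pair of operations, multiply the two matrices and confirm the product equals the Convert of the Cayley Table entry.

```python
# Convert(g) for each C2v operation, as the diagonal matrices above.
convert = {
    "E":    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    "C2":   [[-1, 0, 0], [0, -1, 0], [0, 0, 1]],
    "s_xz": [[1, 0, 0], [0, -1, 0], [0, 0, 1]],
    "s_yz": [[-1, 0, 0], [0, 1, 0], [0, 0, 1]],
}

# Cayley table: table[a][b] is the product a . b.
table = {
    "E":    {"E": "E",    "C2": "C2",   "s_xz": "s_xz", "s_yz": "s_yz"},
    "C2":   {"E": "C2",   "C2": "E",    "s_xz": "s_yz", "s_yz": "s_xz"},
    "s_xz": {"E": "s_xz", "C2": "s_yz", "s_xz": "E",    "s_yz": "C2"},
    "s_yz": {"E": "s_yz", "C2": "s_xz", "s_xz": "C2",   "s_yz": "E"},
}

def matmul(a, b):
    """3x3 matrix product on plain nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Homomorphism: Convert(a . b) == Convert(a) * Convert(b) for all pairs.
for a in table:
    for b in table:
        assert convert[table[a][b]] == matmul(convert[a], convert[b])
print("homomorphism verified for all 16 pairs")
```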

We’d like to extend this to simple integers, which are frequently +1 and -1.

The goal is to come up with four numbers, one for each operation, so that when we replace each symbol by its number, we get a multiplication table that is completely correct.

Choosing to have every symbol represented by +1 leads to a table that is guaranteed to be correct: every multiplication will be +1 \cdot +1, which results in +1, and every symbol is +1. The first line of every Character Table is a line of +1 values.

Having every choice be -1 won’t work, because every multiplication product will be -1 \cdot -1 = +1, and this result disagrees with the value chosen for every element, -1.

The remaining possibilities are lines with both +1 and -1, and sometimes even other integers. We could try to get the other lines by guessing, but there is a trick we can use to reduce, or possibly eliminate, the need to guess. Take an x vector (1,0,0), a y vector (0,1,0), or a z vector (0,0,1) and subject it to each operation. If, after the operation, the vector is unchanged, the integer for that operation is +1; if the vector is reversed, the integer is -1.

For an x vector (1,0,0)

  • E – the vector stays at (1,0,0); no movement of any kind. The integer is +1
  • C_2 – the rotation moves the vector to (-1,0,0). The integer is -1
  • \sigma_v(xz) – the vector lies in the reflection plane, so it stays at (1,0,0). The integer is +1
  • \sigma_v(yz) – the vector is normal to the plane of reflection, so it ends up at (-1,0,0). The integer is -1
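The x-vector walk above can be checked directly by applying each Convert matrix to (1, 0, 0) (the labels are our own abbreviations for the four operations):

```python
# The Convert matrices for C2v, as given earlier.
convert = {
    "E":    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    "C2":   [[-1, 0, 0], [0, -1, 0], [0, 0, 1]],
    "s_xz": [[1, 0, 0], [0, -1, 0], [0, 0, 1]],
    "s_yz": [[-1, 0, 0], [0, 1, 0], [0, 0, 1]],
}

def apply(m, v):
    """Multiply a 3x3 matrix m by a column vector v."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

# Send the x vector through each operation and read off its x component.
x = (1, 0, 0)
integers = [apply(convert[g], x)[0] for g in ("E", "C2", "s_xz", "s_yz")]
print(integers)  # -> [1, -1, 1, -1]
```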

For a y vector (0,1,0)

  • E – the vector stays at (0,1,0); no movement of any kind. The integer is +1
  • C_2 – the rotation moves the vector to (0,-1,0). The integer is -1
  • \sigma_v(xz) – the vector is normal to the plane of reflection, so it ends up at (0,-1,0). The integer is -1
  • \sigma_v(yz) – the vector lies in the reflection plane, so it stays at (0,1,0). The integer is +1

For a z vector (0,0,1)

  • E – the vector stays at (0,0,1); no movement of any kind. The integer is +1
  • C_2 – the vector lies along the rotation axis, so it stays at (0,0,1). The integer is +1
  • \sigma_v(xz) – the vector lies in the reflection plane, so it stays at (0,0,1). The integer is +1
  • \sigma_v(yz) – the vector lies in the reflection plane, so it stays at (0,0,1). The integer is +1

You can confirm these with the Convert matrices: each of the four matrices leaves (0,0,1) unchanged. So the z vector reproduces the all-+1 line rather than giving us a new one. The remaining line, +1, +1, -1, -1, comes instead from a rotation about the z axis, R_z: E and C_2 preserve the sense of that rotation (+1), while each reflection reverses it (-1).

So at this point we have:

  • +1, +1, +1, +1 (z vector; the guaranteed all-+1 line)
  • +1, -1, +1, -1 (x vector)
  • +1, -1, -1, +1 (y vector)
  • +1, +1, -1, -1 (R_z)

We don’t need to test the first line: a multiplication table where every value is +1 cannot have an error in it.

The work below tests +1, +1, -1, -1 (R_z)

\begin{bmatrix} & E & C_2 & \sigma_v(xz) & \sigma_v(yz) \\ E & E & C_2 & \sigma_v(xz) & \sigma_v(yz) \\ C_2 & C_2 & E & \sigma_v(yz) & \sigma_v(xz) \\ \sigma_v(xz) & \sigma_v(xz) & \sigma_v(yz) & E & C_2 \\ \sigma_v(yz) & \sigma_v(yz) & \sigma_v(xz) & C_2 & E \end{bmatrix}   -->  \begin{bmatrix} & +1 & +1 & -1 & -1 \\ +1 & +1 & +1 & -1 & -1 \\ +1 & +1 & +1 & -1 & -1 \\ -1 & -1 & -1 & +1 & +1 \\ -1 & -1 & -1 & +1 & +1 \end{bmatrix}

The work below tests +1, -1, -1, +1 (y vector)

\begin{bmatrix} & E & C_2 & \sigma_v(xz) & \sigma_v(yz) \\ E & E & C_2 & \sigma_v(xz) & \sigma_v(yz) \\ C_2 & C_2 & E & \sigma_v(yz) & \sigma_v(xz) \\ \sigma_v(xz) & \sigma_v(xz) & \sigma_v(yz) & E & C_2 \\ \sigma_v(yz) & \sigma_v(yz) & \sigma_v(xz) & C_2 & E \end{bmatrix} --> \begin{bmatrix} & +1 & -1 & -1 & +1 \\ +1 & +1 & -1 & -1 & +1 \\ -1 & -1 & +1 & +1 & -1 \\ -1 & -1 & +1 & +1 & -1 \\ +1 & +1 & -1 & -1 & +1 \end{bmatrix}

The work to similarly confirm +1, -1, +1, -1 (x vector) is not shown; it is left as a homework problem.
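All of these lines can also be tested programmatically (this covers the homework line too, so try it by hand first). Replacing each symbol by its integer must respect every product in the Cayley Table:

```python
# Cayley table for C2v; s_xz and s_yz abbreviate the two reflections.
table = {
    "E":    {"E": "E",    "C2": "C2",   "s_xz": "s_xz", "s_yz": "s_yz"},
    "C2":   {"E": "C2",   "C2": "E",    "s_xz": "s_yz", "s_yz": "s_xz"},
    "s_xz": {"E": "s_xz", "C2": "s_yz", "s_xz": "E",    "s_yz": "C2"},
    "s_yz": {"E": "s_yz", "C2": "s_xz", "s_xz": "C2",   "s_yz": "E"},
}

def line_ok(chi):
    """chi maps each symbol to +1 or -1; test chi(a.b) == chi(a)*chi(b)
    for every product in the table."""
    return all(chi[table[a][b]] == chi[a] * chi[b]
               for a in table for b in table)

ops = ("E", "C2", "s_xz", "s_yz")
lines = [(+1, +1, +1, +1), (+1, -1, +1, -1),
         (+1, -1, -1, +1), (+1, +1, -1, -1)]
for line in lines:
    print(line, line_ok(dict(zip(ops, line))))  # each line -> True
```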

Appendix A – So far, all of our work with algebraic structures has involved just one operation and the operation has been binary.

Appendix B – Representation Theory But Not in a Group

This was suggested by a Mockingbird student. It is an example that fails the tests required to be a Group. It remains for us to find a reference confirming that what we have below is “representing”.

Assume that for some reason, you just couldn’t add Roman Numerals. So you make the following mapping:

  • I –> 1
  • II –> 2
  • III –> 3
  • IV –> 4
  • V –> 5
  • VI –> 6

 \begin{bmatrix} + & 1 & 2 & 3 \\ 1 & 2 & 3 & 4 \\ 2 & 3 & 4 & 5 \\ 3 & 4 & 5 & 6 \end{bmatrix}

Now you just copy the above table and replace each Arabic numeral with its Roman Numeral.

 \begin{bmatrix} + & I & II & III \\ I & II & III & IV \\ II & III & IV & V \\ III & IV & V & VI \end{bmatrix}

We have two tables, and we can say one is a representation of the other. However, we cannot call the set {I, II, III, IV, V, VI} a group with respect to addition, because we can do an addition such as V + II and the result (which is VII) is not in the set.
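A small sketch of the mapping and the closure failure (the dictionary names are our own):

```python
# The Roman-to-Arabic mapping from the list above, and its inverse.
to_arabic = {"I": 1, "II": 2, "III": 3, "IV": 4, "V": 5, "VI": 6}
to_roman = {v: k for k, v in to_arabic.items()}

def roman_add(a, b):
    """Add two Roman numerals by way of the Arabic mapping; returns
    None when the sum falls outside our six-element set."""
    return to_roman.get(to_arabic[a] + to_arabic[b])

print(roman_add("II", "III"))  # -> V, matching the table
print(roman_add("V", "II"))    # -> None: VII is not in the set
```

The `None` result is exactly the closure failure: the set is big enough to represent a corner of the addition table, but not big enough to be a group under addition.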