Diameter of the Cayley Graph of the

Restricted Burnside Group R(2,5)

SUMMARY REPORT

Todd D. Mateer

August 7, 1996

DIMACS Center at Rutgers University

Dr. Charles Sims, Project Advisor

PROBLEM DESCRIPTION

A Cayley graph organizes the interactions between the members of a group. Each element of the group is represented by a vertex. An edge joins two vertices when one element can be obtained from the other by multiplying by one of the group's generators or by the inverse of a generator. One interesting quantity to determine is the "diameter" of the graph: the maximum, over all group elements, of the minimum number of generator multiplications needed to produce that element. Since the group R(2,5) has so many elements, it is impossible to compute this value exactly with current computer technology. The best one can hope for is to determine the tightest upper and lower bounds attainable with the searching algorithms and computer memory currently available.
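As a toy illustration (not part of the project itself), the diameter of the Cayley graph of the cyclic group Z_10 with generating set {1} can be found by a breadth-first search from the identity; the function name and group choice here are illustrative only.

```python
from collections import deque

def cayley_diameter(n, gens):
    """Diameter of the Cayley graph of Z_n with the given generators
    (inverses are included automatically), by breadth-first search."""
    steps = set(g % n for g in gens) | {(-g) % n for g in gens}
    dist = {0: 0}
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for s in steps:
            w = (v + s) % n
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return max(dist.values())

print(cayley_diameter(10, {1}))  # the farthest element, 5, is five steps away
```

Here the element 5 of Z_10 requires five applications of the generator (or its inverse), so the diameter is 5.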

This REU summer research project was devoted to determining, in eight weeks, the best upper bound possible for the diameter of the Cayley graph of R(2,5). My research improved this upper bound from several thousand to 236. Through computational analysis conducted by my advisor Dr. Charles Sims, the problem of analyzing R(2,5) was reduced to analyzing a normal subgroup of 5^24 elements. Since this subgroup is elementary abelian, I was able to treat it as a vector space of dimension 24 over the field Z_5.

The most direct approach to determining the diameter of the Cayley graph is a breadth-first search, performed with a queue data structure. Dr. Sims provided me with a data file consisting of all members of the vector space whose counterparts in the normal subgroup can be generated by products of no more than 30 of the generators and inverses of R(2,5). The algorithm starts by enqueuing the zero vector with an associated move score of 0. A loop then runs until the queue is empty. Each iteration begins by dequeuing the element at the head of the queue. This element is added to every vector in the data file, keeping a record of the total number of R(2,5) generator multiplications required to produce each new element. Every newly produced element is then added to the queue. Once the queue is empty, the algorithm checks whether every element of the vector space has been reached and, if so, reports the maximum move score required to produce all members of the vector space.
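The search just described can be sketched as follows. This is a minimal illustration over a tiny hypothetical "data file" in Z_5^2 with every move costing 1; in the actual project each data file vector carried its own precomputed move score.

```python
from collections import deque

P = 5  # work over the field Z_5

def max_move_score(dim, moves):
    """Breadth-first search over the vector space Z_5^dim.
    `moves` maps each available vector (a tuple) to its move score;
    scores accumulate as vectors are combined by componentwise
    addition mod 5.  Returns the largest score needed to reach
    every vector, or None if some vector is unreachable."""
    zero = (0,) * dim
    score = {zero: 0}
    queue = deque([zero])
    while queue:
        v = queue.popleft()
        for m, cost in moves.items():
            w = tuple((a + b) % P for a, b in zip(v, m))
            if w not in score:
                score[w] = score[v] + cost
                queue.append(w)
    if len(score) < P ** dim:
        return None
    return max(score.values())

# Toy "data file": the standard basis of Z_5^2, one move each.
print(max_move_score(2, {(1, 0): 1, (0, 1): 1}))
# Adding the additive inverses shortens the longest paths.
print(max_move_score(2, {(1, 0): 1, (4, 0): 1, (0, 1): 1, (0, 4): 1}))
```

With only the basis vectors, the worst case (4, 4) takes eight moves; with the inverses included, no vector needs more than four.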

Although the breadth-first search provides the most accurate answer for the desired upper bound, it is simply too expensive in terms of computer memory. Storing 5^24 elements as integer bit fields requires several million gigabytes, an unrealistic expectation of even today's most sophisticated computers. My analysis therefore turned to a multiple-step approach similar to the one Thistlethwaite developed in his analysis of the diameter of the Rubik's Cube Cayley graph.
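The storage estimate is easy to verify: even a single bit per vector of Z_5^24 already runs to millions of gigabytes.

```python
# Memory needed for a full breadth-first search table over Z_5^24.
elements = 5 ** 24                      # about 5.96e16 vectors
bytes_total = elements // 8             # even one "visited" bit apiece
gigabytes = bytes_total / 2 ** 30
print(f"{elements:.2e} elements -> about {gigabytes:,.0f} GB")
```

Any per-vector move score would need several bits rather than one, multiplying the total accordingly.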

In this multiple-step approach, the 24-dimensional vector space is first mapped into a 15-dimensional subspace, then into a subspace of dimension 7, before finally reaching the zero vector. A breadth-first search handles the transition from the 24-dimensional space to the 15-dimensional subspace. To reach the two smaller subspaces, a second algorithm was developed.

The second algorithm begins by using quicksort to sort the vectors lexicographically, with the rightmost columns as the most significant digits. This organizes the data file into blocks that match in those rightmost columns. First, elements whose rightmost positions are all zero immediately qualify as members of the subspace. Additional subspace elements are obtained by "subtracting" pairs of similar elements within each block. Since the additive inverse of each vector occurs elsewhere in the data file with the same move score, this is a legal operation. The columns to the left of the zeros represent the resulting subspace element. A quota is maintained to enforce an even distribution of the subspace elements for use in the second iteration of the algorithm. Once the 15-dimensional subspace has been completely generated from the data file elements, its members are used in turn to generate the subspace of dimension 7. This approach allows any element of the 24-dimensional vector space to be written as the sum of a vector produced by the breadth-first search and two vectors produced by the second algorithm.
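A minimal sketch of this block-subtraction step, on a hypothetical three-vector data file over Z_5 (the function name and data are illustrative only): sorting groups vectors that agree in the rightmost columns, and subtracting two members of a block yields a subspace element whose cost is the sum of the two move scores, since a vector's additive inverse carries the same score.

```python
P = 5  # work over the field Z_5

def subspace_moves(vectors, keep):
    """From (vector, score) pairs over Z_5, derive vectors lying in
    the subspace whose coordinates beyond `keep` are all zero.
    Sorting on the rightmost columns groups vectors that agree there;
    subtracting two members of a block cancels those columns."""
    out = {}

    def record(vec, score):
        if all(c == 0 for c in vec[keep:]):
            head = vec[:keep]
            if head not in out or score < out[head]:
                out[head] = score

    # Elements already ending in zeros qualify immediately.
    for v, s in vectors:
        record(v, s)
    # Sort on the rightmost columns, then subtract within each block.
    data = sorted(vectors, key=lambda vs: vs[0][keep:])
    for (v, sv), (w, sw) in zip(data, data[1:]):
        if v[keep:] == w[keep:]:
            diff = tuple((a - b) % P for a, b in zip(v, w))
            record(diff, sv + sw)   # inverse of w costs the same as w
    return out

datafile = [((1, 2, 3), 4), ((4, 0, 3), 5), ((2, 1, 0), 6)]
print(subspace_moves(datafile, keep=2))  # {(2, 1): 6, (2, 2): 9}
```

The quota bookkeeping described above is omitted here for brevity.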

Although the new strategy succeeded in producing all of the vector space elements, it yielded disappointing upper bounds, especially during the second iteration of the second algorithm. The results were worse than could have been obtained using the standard basis of the 24-dimensional vector space. The disappointing results were caused by a shortage of subspace vectors already present in the data file. Dr. Sims suggested applying a linear transformation to the data file vectors to produce a new set of vectors that includes the standard basis. This process would also transform many additional data file vectors into subspace members. Since the transformation is invertible, and hence one-to-one and onto, the transformed set of vectors can be analyzed in place of the original and yields the same upper bound for the Cayley graph diameter.

The strategy for determining the linear transformation begins by finding 24 linearly independent vectors in the data file. A theorem from linear algebra states that a 24-by-24 matrix with linearly independent rows is invertible. Multiplying this matrix by its inverse produces the identity matrix. Multiplying every data file vector by the same inverse therefore yields a new vector set that includes the rows of the identity matrix, along with many other vectors containing numerous zero entries.

The set of independent vectors is constructed one vector at a time. When n vectors are under consideration, an n-by-n matrix containing the rightmost n columns of each vector is examined. This matrix is row-reduced by Gaussian elimination. Since the resulting matrix is upper triangular, its determinant is the product of its diagonal entries. Another theorem from linear algebra states that a set of n vectors is linearly independent if and only if the corresponding n-by-n matrix has a nonzero determinant.
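The independence test can be sketched as follows, with all arithmetic over Z_5; the multiplicative inverse of a pivot is computed via Fermat's little theorem (a^(p-2) mod p).

```python
P = 5  # work over the field Z_5

def det_mod_p(matrix):
    """Determinant mod 5 by Gaussian elimination.  Row swaps flip the
    sign; the eliminated matrix is upper triangular, so the determinant
    is the product of the diagonal entries."""
    m = [[x % P for x in row] for row in matrix]
    n = len(m)
    det = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col]), None)
        if pivot is None:
            return 0                      # a zero column: singular
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det = (-det) % P              # row swap flips the sign
        inv = pow(m[col][col], P - 2, P)  # pivot inverse in Z_5
        for r in range(col + 1, n):
            factor = (m[r][col] * inv) % P
            for c in range(col, n):
                m[r][c] = (m[r][c] - factor * m[col][c]) % P
        det = (det * m[col][col]) % P
    return det

def independent(vectors):
    """n vectors over Z_5 are linearly independent iff det != 0 mod 5."""
    return det_mod_p([list(v) for v in vectors]) != 0

print(independent([(1, 2), (2, 4)]))   # (2,4) = 2*(1,2), so False
print(independent([(1, 2), (0, 3)]))   # True
```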

Once the above process has generated a set of 24 linearly independent vectors, the inverse can be obtained by solving 24 systems of equations, each consisting of the 24-by-24 matrix augmented with one column of the identity matrix. Since the process needed to be performed only once, Gaussian elimination was used even though faster algorithms exist. Once the inverse was obtained, each data file vector was transformed by multiplying it by this matrix. The transformed data file was then analyzed using the breadth-first search and two iterations of the subspace algorithm.
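A sketch of the inversion and the transformation of data file vectors over Z_5. Gauss-Jordan elimination on the augmented matrix solves all the identity-column systems at once; a 2-by-2 toy matrix stands in here for the actual 24-by-24 one.

```python
P = 5  # work over the field Z_5

def inverse_mod_p(matrix):
    """Invert a square matrix over Z_5 by Gauss-Jordan elimination on
    the matrix augmented with the identity.  Assumes the matrix is
    invertible over Z_5 (its rows are linearly independent)."""
    n = len(matrix)
    aug = [[x % P for x in row] + [int(i == j) for j in range(n)]
           for i, row in enumerate(matrix)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r][col])
        aug[col], aug[pivot] = aug[pivot], aug[col]
        inv = pow(aug[col][col], P - 2, P)        # pivot inverse in Z_5
        aug[col] = [(x * inv) % P for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [(a - f * b) % P for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def transform(vector, matrix):
    """Image of a row vector under the matrix, over Z_5."""
    n = len(matrix)
    return [sum(vector[i] * matrix[i][j] for i in range(n)) % P
            for j in range(n)]

B = [[2, 1], [1, 1]]            # independent rows from a toy data file
Binv = inverse_mod_p(B)
print(Binv)                     # [[1, 4], [4, 2]]
print(transform(B[0], Binv))    # first basis row maps to [1, 0]
print(transform(B[1], Binv))    # second maps to [0, 1]
```

Transforming the chosen independent vectors by the inverse produces exactly the rows of the identity matrix, as the report describes.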

This third approach succeeded in achieving an upper bound of 236. When more powerful computers become available for this project, it is hoped that this result can be improved by analyzing the vector space in two steps of 12 columns each. In any case, the substantial improvement in the upper bound on the Cayley graph diameter of R(2,5), from several thousand to 236, demonstrates the powerful role that computational techniques can play in the study of abstract algebra.

This research experience has given me the opportunity to apply many skills learned in the Mathematics, Computer, and Electrical Engineering departments of Grove City College to an open research problem. Specifically, knowledge gained in Linear Algebra, Discrete Mathematics, and Abstract Algebra played a central role in this mathematical analysis. After encountering the memory limitations of the GAP computational package, I rewrote my entire computer program in the C programming language, a process that involved significant use of data structures, algorithms, and binary arithmetic. Finally, my engineering background enabled me to make numerous decisions that maintained a balance among program accuracy, memory requirements, and algorithm speed.

This research experience has also taught me several new skills. As my college does not have a fully established Computer Science curriculum, I had the opportunity to read several algorithm texts and successfully integrated several of their algorithms into my computer program. Furthermore, my Abstract Algebra course avoided the use of computers and concrete groups such as Z_5. My REU research made up for this gap in my undergraduate education by exposing me to the GAP software package and requiring me to perform significant arithmetic and linear algebra over the field Z_5. Finally, this project provided my first exposure to the UNIX operating system environment, as my college is still limited to the VAX environment.

This research experience has given me a strong desire to pursue a graduate degree. In the fall of 1997, I hope to begin doctoral study in Computational Science in preparation for a teaching career at the college level. My planned area of study will allow me to continue my interdisciplinary curriculum and will prepare me to solve problems similar to the one I investigated in my REU program. My experience this summer with DIMACS has been extremely valuable and will provide numerous benefits in the months and years to come.