DIMACS REU 2018

General Information

Student: Martin Hora
Office: CoRE 448
School: Charles University in Prague
Coworkers: Vaclav Koncicky, Peter Korcsok, Radim Predstirany, Michael Skotnica, Ondrej Splichal, Aneta Stastna, Jakub Tetek
Projects: Algorithms for systems with embedded FPGA

I am part of the Czech group. We work together on several different projects. I, along with Jakub and Vaclav, mainly focus on developing algorithms for systems with an embedded FPGA. Descriptions of our other projects can be found on the webpages of my colleagues.


Theoretical Model of Computation and Algorithms for FPGA-based Hardware Accelerators

A field-programmable gate array (FPGA) is an integrated circuit that can be repeatedly reconfigured by a customer. It consists of programmable logic blocks, which implement logic functions (AND, OR, XOR, etc.), and programmable routing that connects the logic blocks.

An FPGA is usually used to speed up the evaluation of functions such as SHA-256. An FPGA allows time multiplexing, which means that it can process multiple inputs simultaneously as long as each computation is in a different phase.
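
To illustrate the idea, here is a small Python sketch of time multiplexing: a three-stage pipeline holds up to three inputs at once, each in a different stage, and accepts a new input every clock tick. The stage functions and their arrangement are made up for the example; they do not come from a real FPGA design.

# Illustrative simulation of FPGA-style time multiplexing (pipelining).
# Each input occupies one pipeline stage per clock tick, so a k-stage
# pipeline can hold up to k inputs at once, one per stage.
# The stage functions below are placeholders chosen for the example.

def stage1(x): return x + 1
def stage2(x): return x * 2
def stage3(x): return x ^ 0xFF

STAGES = [stage1, stage2, stage3]

def pipelined_run(inputs):
    """Feed one new input per tick; every tick all occupied stages fire."""
    pipeline = [None] * len(STAGES)   # value currently held by each stage
    outputs = []
    stream = iter(inputs)
    ticks = len(inputs) + len(STAGES) - 1
    for _ in range(ticks):
        # The last stage produces a finished result.
        if pipeline[-1] is not None:
            outputs.append(pipeline[-1])
        # Shift every value one stage forward, applying that stage's logic.
        for i in reversed(range(1, len(STAGES))):
            pipeline[i] = STAGES[i](pipeline[i - 1]) if pipeline[i - 1] is not None else None
        nxt = next(stream, None)
        pipeline[0] = STAGES[0](nxt) if nxt is not None else None
    if pipeline[-1] is not None:
        outputs.append(pipeline[-1])
    return outputs

print(pipelined_run([1, 2, 3, 4]))  # one result per tick once the pipeline is full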

Intel is about to release the first CPU with an integrated FPGA. Our goal is to develop efficient algorithms for such systems.


Weekly Log

Week 1:

We designed a realistic theoretical model of computation with an embedded FPGA, which we call Pipelined-Circuit RAM (PCRAM). In order to do that, we had to research the properties of FPGAs and determine all the relevant parameters of an FPGA chip, such as the number of logic blocks and the number of I/O pins.
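
Purely as an illustration of the kind of parameters we collected, they could be recorded like this in Python; the field names and example values are my own placeholders, not the definitive parameter set of the PCRAM model.

from dataclasses import dataclass

# Hypothetical record of FPGA chip parameters relevant to the PCRAM model.
# Field names and example values are placeholders for illustration only.
@dataclass
class FPGAParams:
    logic_blocks: int        # number of programmable logic blocks
    io_pins: int             # number of I/O pins (limits data moved per tick)
    clock_mhz: float         # FPGA clock frequency
    reconfig_time_ms: float  # time needed to load a new configuration

example_chip = FPGAParams(logic_blocks=100_000, io_pins=400,
                          clock_mhz=300.0, reconfig_time_ms=100.0)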

Week 2:

We developed and analysed the following algorithms in the PCRAM model:

  • An algorithm for computing a general associative operation on $n$ numbers (e.g. the sum of $n$ numbers); see the sketch after this list.
  • An efficient sorting algorithm based on QuickSort.
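
The sketch below shows the balanced tree reduction behind the associative-operation algorithm: the combines within one level are independent of each other, so a level can be mapped to a circuit and the levels fed through the pipeline. The Python code is sequential and only illustrates the dependency structure, not the PCRAM scheduling.

# Balanced (tree-shaped) reduction of an associative operation.
# Within one level all combines are independent, so on an FPGA they can
# be evaluated by a circuit while the levels stream through the pipeline;
# this sequential sketch only shows the dependency structure.

def tree_reduce(values, op):
    """Combine `values` with the associative operation `op`, level by level."""
    level = list(values)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(op(level[i], level[i + 1]))  # independent combines
        if len(level) % 2 == 1:                     # odd element carried up
            nxt.append(level[-1])
        level = nxt
    return level[0]

print(tree_reduce(range(1, 9), lambda a, b: a + b))  # 36, the sum of 1..8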

Furthermore, we considered data structures that can be improved in the PCRAM model, focusing in particular on $(a,b)$-trees. We found a way to utilize pipelining in order to process multiple Find and Insert queries simultaneously.
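
As a rough sketch of the pipelining idea for Find queries, the Python code below uses a sorted array standing in for an $(a,b)$-tree: on every tick each in-flight query advances by one comparison level, so several queries are in progress at once, each at a different depth. This is an illustration only, not our actual data structure.

# Pipelined Find, sketched on a sorted array instead of an (a,b)-tree:
# on every "clock tick" each in-flight query advances by one comparison
# level, so several queries are in progress simultaneously, each at a
# different depth, mirroring how the tree levels could be pipelined.

def pipelined_find(keys, queries):
    """keys: sorted list; queries: values to look up. Returns {query: found?}."""
    # state per query: (query index, lo, hi) meaning the answer lies in keys[lo:hi]
    in_flight = [(i, 0, len(keys)) for i in range(len(queries))]
    results = {}
    while in_flight:
        nxt = []
        for i, lo, hi in in_flight:      # one level of work per query per tick
            if hi - lo <= 0:
                results[queries[i]] = False
                continue
            mid = (lo + hi) // 2
            if keys[mid] == queries[i]:
                results[queries[i]] = True
            elif keys[mid] < queries[i]:
                nxt.append((i, mid + 1, hi))
            else:
                nxt.append((i, lo, mid))
        in_flight = nxt
    return results

print(pipelined_find([2, 3, 5, 7, 11, 13], [5, 6, 13]))  # 5 and 13 found, 6 not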

I also helped out with the $k$-colored Point-set Embeddability of Graphs problem. I wrote a computer program that takes an outerplanar graph as input and tests whether there exist a 2-coloring of the graph and a 2-colored point set such that every embedding of the graph into the point set that respects the coloring has curve complexity greater than one. The program can only process small graphs (up to 12 vertices). Unfortunately, we have not yet found a graph that has a 2-coloring requiring curve complexity at least 2.

Week 3:

We determined lower bounds for several problems in the PCRAM model using the theory of cache-aware algorithms. For example, we found a lower bound for search queries in an ordered set. Our implementation of $(a,b)$-trees achieves this bound.

Next, we examined how to speed up recursive algorithms, using two different approaches. The first one exploited the Master theorem: we managed to apply pipelined circuits in one of the three cases of the theorem (the case with a leaf-heavy recursion tree). The second approach was to speed up the dynamic programming technique. We found a few algorithms based on dynamic programming in which pipelined circuits can be utilized efficiently.
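
As one example of a dynamic program where a circuit fits naturally, consider 0/1 knapsack: every cell of a new row depends only on the previous row, so a whole row update could be realized as one circuit with rows streamed through the pipeline. Knapsack is my own illustrative choice here, not necessarily one of the algorithms from our notes.

# 0/1 knapsack as an example of a DP amenable to circuit evaluation:
# each cell of the new row depends only on the previous row, so all
# cells of a row are independent and a row update could be computed by
# a single (pipelined) circuit while rows stream through it.

def knapsack(items, capacity):
    """items: list of (weight, value) pairs; returns the best value within capacity."""
    row = [0] * (capacity + 1)
    for weight, value in items:
        # This whole row update is what one combinational circuit would
        # compute at once for a fixed (weight, value) pair.
        row = [max(row[c], row[c - weight] + value) if c >= weight else row[c]
               for c in range(capacity + 1)]
    return row[capacity]

print(knapsack([(3, 4), (4, 5), (2, 3)], 6))  # best value with capacity 6 -> 8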

Week 4:

We continued working on recursive algorithms. We realized that similar techniques can be used to speed up exponential algorithms, such as solving 3-SAT. Next, we thought about the memory complexity of recursive algorithms. So far we had used BFS to traverse the recursion tree; this approach leads to a nice time complexity, but it is quite memory-consuming. We considered a hybrid traversal strategy that begins with DFS and switches to BFS later. This course of action slightly increases the runtime, but it significantly decreases the required memory.
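
A schematic Python sketch of such a hybrid traversal is below: the recursion tree of a toy divide-and-conquer problem is traversed depth-first near the root, and once a subproblem falls under a (placeholder) size threshold its subtree is traversed breadth-first, so a whole level of small subproblems is available at once to feed the pipeline. This only illustrates the traversal order, not the full PCRAM algorithm.

from collections import deque

# Schematic hybrid traversal of a recursion tree: depth-first near the
# root (memory stays proportional to the depth), switching to
# breadth-first once subproblems are small, so a whole level of small
# subproblems is available together for a pipelined circuit.
# The toy problem (splitting a range in two) and the threshold are placeholders.

THRESHOLD = 4  # below this subproblem size, switch from DFS to BFS

def hybrid_traverse(lo, hi, visit):
    size = hi - lo
    if size <= THRESHOLD:
        # BFS over the small subtree: nodes of one level are emitted
        # close together, which is where the pipeline would be kept busy.
        queue = deque([(lo, hi)])
        while queue:
            a, b = queue.popleft()
            visit(a, b)
            if b - a > 1:
                mid = (a + b) // 2
                queue.append((a, mid))
                queue.append((mid, b))
        return
    # DFS near the root: only one child subtree is expanded at a time.
    visit(lo, hi)
    mid = (lo + hi) // 2
    hybrid_traverse(lo, mid, visit)
    hybrid_traverse(mid, hi, visit)

hybrid_traverse(0, 16, lambda a, b: print(f"subproblem [{a}, {b})"))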

We also thought about simulating RAMs and Turing machines in the PCRAM model.

Week 5:

We found a better way to decrease the memory complexity of recursive algorithms in the PCRAM model. Instead of a hybrid of BFS and DFS, we use two DFS traversals running in parallel.

Apart from that, we were writing a paper about our results.

Week 6:

We finished writing the paper and published it on arXiv.

