General Information

Student: Adam Zheleznyak
School: University of Pennsylvania
E-mail: adam [dot] zheleznyak [at] gmail [dot] com
Project: Problems in dynamical systems

Project Description

My mentors have been working on a method to understand the dynamics of (biological) regulatory networks. DSGRN (Dynamic Signatures Generated by Regulatory Networks) is software implementing this method: it takes a regulatory network and returns a database of the network's global dynamics over all of parameter space. In DSGRN, a regulatory network consists of multiple species that may activate or repress one another. The effect of one species activating another is modeled by a piecewise constant function that switches from a low expression level to a high expression level at some threshold (in the repression case, the expression level switches from high to low instead).
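The two-level switching model described above can be sketched in a few lines of Python. This is only an illustration of the piecewise constant interaction, not DSGRN's actual API; the function and parameter names here are hypothetical.

```python
def switching_value(x, theta, low, high, activating=True):
    """Piecewise constant interaction term: the expression level x of the
    regulating species determines its contribution to the target species.
    (Illustrative sketch only; names are hypothetical, not DSGRN's API.)"""
    above = x >= theta
    if activating:
        return high if above else low  # activation: low -> high at the threshold
    else:
        return low if above else high  # repression: high -> low at the threshold

# Activation: expression jumps from low to high at the threshold theta = 1.0
assert switching_value(0.5, theta=1.0, low=1, high=5) == 1
assert switching_value(1.5, theta=1.0, low=1, high=5) == 5
# Repression: the jump goes from high to low instead
assert switching_value(0.5, theta=1.0, low=1, high=5, activating=False) == 5
```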

For my project, I am looking at how the method behind DSGRN can be generalized to handle more general nonlinear dynamical systems in two dimensions. More specifically, I am looking at the case when there can be multiple expression levels and multiple thresholds. I am also looking at what new information this generalization can give us compared to the original case.
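To make the "multiple expression levels and multiple thresholds" generalization concrete, here is a minimal sketch: with k thresholds, the output takes one of k+1 expression levels, selected by which thresholds the input has crossed. Again, this is an illustrative helper of my own naming, not code from DSGRN.

```python
import bisect

def multilevel_value(x, thresholds, levels):
    """Piecewise constant function with multiple thresholds.
    thresholds must be sorted; levels has length len(thresholds) + 1.
    (Hypothetical helper for illustration only.)"""
    # bisect_right counts how many thresholds x has crossed (>= counts as crossed)
    return levels[bisect.bisect_right(thresholds, x)]

# Three thresholds, four increasing expression levels (activation-like):
assert multilevel_value(0.2, [1, 2, 3], [0, 1, 2, 3]) == 0
assert multilevel_value(2.5, [1, 2, 3], [0, 1, 2, 3]) == 2
# Decreasing levels model a repression-like response instead:
assert multilevel_value(2.5, [1, 2, 3], [3, 2, 1, 0]) == 1
```

The original two-level case is recovered by taking a single threshold with levels `[low, high]`.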

Weekly Log

Week 1:
I enjoyed meeting other students through the orientation and other means. Although some things may be lost in an online program, I have been pleasantly surprised to see the creative steps taken by my peers, my mentors, and the REU organizers to better connect and communicate.
In preparation for the first meeting with my mentors, I read [1] (under References) to learn about DSGRN. In the meeting itself, I got a better picture of the motivations behind the project, and we decided on the kind of work I will be doing this summer.
I also prepared and presented my Introductory Presentation to everyone in the REU.
Week 2:
Right away I was struck by the question of how the notions of activation and repression can be generalized when there are multiple thresholds. I found two different ways this can be done and experimented with both using simple examples, which also helped me understand the general problem better. Speaking with my mentors, I learned that the difference between the two approaches can be handled at runtime, so there is no need to pick one over the other.
Students doing work on DSGRN are supposed to give weekly presentations as an update to the rest of the research group. Here is my First Group Presentation.
Week 3:
I calculated some more complex examples by hand. I also read [2] and have begun to read [3]. [2] was recently submitted for publication and covers a new, faster technique for performing the parameter space decomposition of a regulatory network (with two levels of expression for each edge). This information is stored in a database and is used to create the parameter graph when a user runs DSGRN. For now, I will mainly focus on generalizing the techniques used in [2] so they can be applied to regulatory networks with more than two levels of expression per edge.
I also learned more about how the codebase for DSGRN is structured: there are quite a few moving pieces but it's amazing to see how the software can be made so efficient!
Here is my Second Group Presentation.
Week 4:
I worked more with the ideas used in [2] and have figured out how to apply the same algorithms to the parameter space decomposition of the more generalized regulatory networks I'm working with. Many of the specifics in [2] do not apply to the generalized case, but the core ideas do, and it actually wasn't too hard to apply them directly. For example, when there are only two levels of expression, the parameter space decomposition is the same problem as finding certain linear extensions of a Boolean lattice. However, this is no longer true when there are more levels of expression: instead, the parameter space decomposition involves finding linear extensions of the lattice of points in a subset of the coordinate plane ordered componentwise. This difference only changes how you describe the problem; it has no practical implications, and I should be able to write code to perform the parameter space decomposition of regulatory networks with multiple levels of expression.
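The connection between the two-level case and the multi-level case can be illustrated with a brute-force sketch that enumerates linear extensions of a set of lattice points ordered componentwise. This is not the algebraically constrained algorithm from [2]; it is only a naive enumeration (feasible for tiny posets) that shows how the Boolean lattice is replaced by a grid of points when there are more expression levels.

```python
from itertools import permutations, product

def linear_extensions(points):
    """Enumerate all linear extensions of a set of lattice points ordered
    componentwise. Brute force: checks every permutation, so this is only
    usable for very small posets (illustration, not the method of [2])."""
    def leq(p, q):
        return all(a <= b for a, b in zip(p, q))
    extensions = []
    for perm in permutations(points):
        rank = {p: i for i, p in enumerate(perm)}
        # A permutation is a linear extension iff it respects the partial order
        if all(rank[p] <= rank[q] for p in points for q in points if leq(p, q)):
            extensions.append(perm)
    return extensions

# Two levels of expression: the Boolean lattice {0,1}^2 has 2 linear extensions
assert len(linear_extensions(list(product(range(2), repeat=2)))) == 2
# Three levels in one coordinate: a 2x3 grid of points has 5 linear extensions
assert len(linear_extensions(list(product(range(2), range(3))))) == 5
```

The rapid growth of these counts with the grid size is exactly why the naive approach breaks down and faster techniques are needed.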
Here is my Third Group Presentation.
Week 5:
This week I attended the ACM Symposium on Theory of Computing (STOC 2020), which was online of course. Most presentations were hard to follow, but I did pick up some cool mathematical ideas. This was the first conference I've attended and it was a useful experience to see what research can look like and how it is shared. I also enjoyed the Junior-Senior lunch and got some useful advice from it.
For my research, I successfully coded the parameter space decomposition. However, it becomes uselessly slow on almost all non-trivial inputs, so I will be working on speeding up my code.
Here is my Fourth Group Presentation.
Week 6:
I was able to make my code run much faster after finding a simple way to eliminate unnecessary calculations. There are some more calculations I'd like to make that are still too slow; however, it is very likely they are slow simply because the number of linear extensions grows very rapidly as the input size increases. There is not much I can do to speed them up other than distributing them over multiple CPUs on a server, which I will work on once I get access to one.
I also wrote code that finds the possible "binning representations" for a regulatory network, using the linear extensions I calculated earlier. Roughly, this means determining, for each possible input to a specific node, how many of that node's output thresholds are surpassed when crossing into that input. I performed this calculation for each linear extension I had computed, assuming up to 6 output thresholds. This is not computationally hard, but it produced more than half a gigabyte of data, so there is a practical limit to how far these calculations can be carried out.
I also began discussing and figuring out specifically what I'd need to add/change in the DSGRN code so that it can use my data and find the dynamical signatures of regulatory networks with multiple levels of expression. This task seems like it will be quite involved but feasible.
Here is my Fifth Group Presentation.
Week 7:
I've been working on modifying DSGRN so that it can handle multiple expression levels. This has been quite difficult so far because I have not coded in C++ for many years and because working with someone else's code is more challenging than I expected. After discussing it with Marcio, I better understood the constructions used in the DSGRN code. This is the most involved coding I've done, but I am eager to get it working and to see my work come to fruition so that we can explore the dynamics of more complicated regulatory networks.
Week 8:
This week I've continued to work on the code for DSGRN. I also got access to a server at Rutgers so that I can run some more complicated calculations. I've been working on my final paper and final presentation as well. I will give my presentation to the DSGRN research group and later to everyone in the DIMACS REU.
Week 9:
This was the final week of the REU. I wrote my final report and gave a short presentation on my project: Final Presentation.
I enjoyed listening to what everyone did over the summer, and overall I had a great time with the program. I plan to continue working on my project and to build upon what I've done so far.
Next week I will be giving a longer (30 minute) presentation at the CoSP REU Student Workshop where I plan to talk about my project in more detail.
Week 10:
Here is my CoSP Student Workshop Presentation.


  1. Bree Cummins, Tomas Gedeon, Shaun Harker, Konstantin Mischaikow, and Kafung Mok, Combinatorial Representation of Parameter Space for Switching Systems, SIAM Journal on Applied Dynamical Systems, 15 (2016), doi:10.1137/15M1052743. Preprint from arXiv.org.
  2. Shane Kepley, Konstantin Mischaikow, and Lun Zhang, Computing linear extensions for Boolean lattices with algebraic constraints. Preprint from arXiv.org.
  3. William D. Kalies, Konstantin Mischaikow, and Robert C.A.M. Vandervorst, Lattice Structures for Attractors III. Preprint from arXiv.org.
  4. Tomáš Gedeon, Bree Cummins, Shaun Harker, and Konstantin Mischaikow, Identifying robust hysteresis in networks, PLoS Computational Biology, 14 (4), doi:10.1371/journal.pcbi.1006121.

Additional Information


This work was carried out while I was a participant in the 2020 DIMACS REU program at Rutgers University, supported by NSF HDR TRIPODS award CCF-1934924.

I'd also like to acknowledge the support and guidance of Marcio Gameiro, Shane Kepley, and Konstantin Mischaikow. Thank you for making the summer great!