Davey FitzPatrick's REU 2020 Web Page

About Me

Name: Davey FitzPatrick
Email: davidbf@princeton.edu
Home Institution: Princeton University
Project: Three Color Potts Model
Mentor: Professor Bhargav Narayanan

About My Project

Consider a graph $G$ and a fixed sequence of vertices $v_1, ..., v_n$ in $G$, and fix a constant $C > 1$. Start with some coloring $\omega$ of the vertices of $G$ with three colors (red, blue, green) and perform the following procedure: update the color of $v_1$ to red, blue, or green with probabilities proportional to $C^a$, $C^b$, $C^c$, respectively, where $a$, $b$, $c$ are the numbers of red, blue, and green neighbors of $v_1$ under the current coloring (so a color is more likely the more popular it is among the neighbors, but every color has nonzero probability, since $C^0 = 1$). Then do the same thing for $v_2$, and so on up to $v_n$. Let $\Pr_{\omega}$ be the probability that all vertices are colored blue at the end of this process. What choice of $\omega$ maximizes $\Pr_{\omega}$?

We might think it obvious that the answer is the all blue coloring, but this is not known! When there are only two colors, red and blue, we do know that all blue is best: every time we modify $\omega$ by changing a red vertex to a blue one, we can see by induction that $\Pr_{\omega}$ can only improve (call this "monotonicity"). It's also trivial when all the vertices in the sequence are distinct. But when the vertices repeat and we have three colors, we might plausibly want to make some vertices in $\omega$ red and green to hedge against large clusters of red or green that could occur later, on the basis that $C^b + C^c$ is generally much smaller than $C^{b + c}$.

Some easy results in this direction are that monotonicity fails with three colors and that all blue need not be best if, instead of using the function $C^x$ for the updates, we use a general positive increasing function $F_{v_i}$ for each update. So what assumptions on the update functions do ensure that all blue is best, and how can we prove anything without appealing to monotonicity?
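To make the procedure concrete, here is a small Mathematica sketch of one pass of the update sequence; the graph, vertex sequence, starting coloring, and value of $C$ in the example call are arbitrary choices for illustration, not anything from the project. Running it many times and counting all blue outcomes would estimate $\Pr_{\omega}$.

    (* One pass of the update procedure: recolor each vertex of seq in turn, with weights capC^(number of neighbors of each color). *)
    runUpdates[graph_, seq_, omega0_, capC_] := Module[{omega = omega0},
      Do[
        With[{counts = Table[Count[omega /@ AdjacencyList[graph, v], col], {col, {red, blue, green}}]},
          omega[v] = RandomChoice[(capC^counts) -> {red, blue, green}]],
        {v, seq}];
      omega]

    (* Example: a triangle, each vertex updated once, starting from the all blue coloring, with C = 2. *)
    runUpdates[CycleGraph[3], {1, 2, 3}, Association[1 -> blue, 2 -> blue, 3 -> blue], 2]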

Research Log

Week 1: May 25 - May 29

I met Professor Narayanan and learned of an approach that involved building a certain class of functions on the color space and computing their expectations as rational functions of certain parameters. If we can show that these rational functions are always positive when given appropriate inputs, then we can show that the all blue coloring is always the optimal initial coloring for the end states that we care about. We are trying to show this not just for an exponential updating function but for any family of convex functions. After clarifying a few aspects of this approach, I studied some basic Mathematica commands and started trying to understand Professor Narayanan's code. I also thought a bit more about the coin problem, in case I ended up switching to that.
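The code itself is Professor Narayanan's, so I will not reproduce it here, but as a rough illustration of the kind of expression involved (my own reconstruction with made-up names; the actual class of functions and parameters in the approach may differ), the expected value of a test function of a single vertex's updated color is a rational function of the base of the exponential weight:

    (* Expected value of a test function f of the updated color of a vertex with nRed, nBlue, nGreen neighbors of each color, under the weight function q^x. *)
    oneStepExpectation[f_, {nRed_, nBlue_, nGreen_}, q_] :=
      (q^nRed f[red] + q^nBlue f[blue] + q^nGreen f[green]) / (q^nRed + q^nBlue + q^nGreen)

    (* For a vertex with two red neighbors and one blue neighbor; the result is a rational function of q. *)
    Together[oneStepExpectation[f, {2, 1, 0}, q]]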

Week 2: June 1 - June 5

I gave an initial presentation on the Potts problem. I also thought unsuccessfully about approaches to the coin problem and eventually decided to work on the Potts problem along with some other students. I studied Professor Narayanan's code but was not yet able to understand exactly what it was doing.

Week 3: June 8 - June 12

I figured out the details of the existing code. It turns out that in Mathematica, a trailing iterator like ", {x, set}" in a command such as Table or Sum means "do the preceding expression for each x in set," and that explained pretty much everything.
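For instance, a minimal example of that iterator form:

    Table[x^2, {x, {1, 2, 3}}]   (* evaluates x^2 once for each x in the list, giving {1, 4, 9} *)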

Week 4: June 15 - June 19

I investigated the difference between two approaches: (a) applying multi-step updates to the one expression we actually care about preserving under updates, F[all blue] - F[other state], and verifying that the coefficients (linear combinations of values of F) are positive for the choices of F that we care about; and (b) applying one-step updates to the expression we care about, verifying that the coefficients are positive for the F that we care about, then applying one-step updates to those coefficients, and repeating. (a) was the approach that I had originally discussed, but the code turned out to be doing (b). I eventually figured out that they are essentially equivalent, but (b) is slightly stronger: whenever we lengthen a multi-step update by a single update and apply the lengthened update to an expression, all the resulting coefficients are positive linear combinations of coefficients from one-step updates of the coefficients in the unlengthened update of the expression.

Week 5: June 22 - June 26

I have been trying to analyze the known counterexamples (where the update functions are increasing but not convex) found by Professor Narayanan and Quentin Dubroff, the graduate student we are working with. The graph is very simple, but the final update does not seem to depend on the preceding updates, which is very strange. In addition, while my adviser does have a finite stable set of conditions on the function F for three vertices and convex update functions, the list is very long, so I have been trying to shorten it by adding an extra update function. Unless I have missed a contradiction, I have been able to eliminate six conditions (or eight, but only by adding two new ones, which I got by averaging each of the conditions for two vertices and arbitrary increasing functions over the possible color choices for the third vertex). Hopefully I can eliminate more.

Week 6: June 29 - July 3

After clarifying my confusion with the code containing the known counterexample, I analyzed it in detail. I eliminated all the huge constants in the update functions and ended up with functions that involved only small integers. I was also able to limit the non-convexity to exactly one place (i.e., one specific second difference of one of the update functions is negative, while all others are non-negative), but this non-convexity seems difficult to eliminate. I also discovered exactly what the two main steps in the counterexample are doing. The first one is very sensitive to the color of the first vertex when the second vertex is green or red, but very insensitive when the second vertex is blue; thus it allows us to focus on the case where the second vertex is green or red and essentially disregard the other case. The second step is where the non-convexity comes in: it allows us to create a threshold, so that, e.g., adding more blue vertices past that threshold does not increase the probability of the updated vertex turning either blue or red, but, if the number of red vertices is below the threshold, adding more red vertices may do so. This is what seems difficult to extend to the convex setting, but Professor Narayanan thinks there is hope. The idea would be to stitch together some version of the first update with interesting pieces from the other counterexamples that my group members have been studying to make something more complex.
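To illustrate the threshold idea with made-up numbers (these are not the update functions from the actual counterexample): an increasing function that plateaus has exactly one negative second difference, at the start of the plateau, and adding further neighbors of a color past the plateau no longer changes that color's weight.

    (* A toy increasing update function that plateaus at 4. *)
    f = {1, 2, 4, 4, 4};   (* values at 0, 1, 2, 3, 4 *)
    Differences[f, 2]      (* {1, -2, 0}: exactly one negative second difference, where the plateau begins *)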

Week 7: July 6 - July 10

This week started out fairly slowly. After verifying that the counterexample from last week can be straightforwardly turned into a counterexample on an undirected graph with only one additional vertex, I thought about whether this could be done for a general counterexample on a digraph, and I found a counterexample to that. Then I found a fiddly way to adapt the digraph counterexample to use a single update function at every step (a question that my group members had been considering), but I was not able to extend it to the graph counterexample. Finally, on Friday, I had a random idea for how to get around the obstacle to a convex counterexample that I mentioned last week: introduce a set of vertices that are all the same random color, and modify the third update so that only that set can reach the threshold. It turned out that the color of the one vertex that differs between the two starting states can only affect whether this threshold is reached if it is not blue, so the counterexample still worked! This meant that the whole approach involving coefficient-chasing in Mathematica was doomed to fail for higher numbers of vertices. I was happy about that, since I found the coefficient-chasing tedious, but also somewhat disappointed because of the time we had invested.

Week 8: July 13 - July 17

I spent this week trying to adapt the convex counterexample to use a single update function. I was first able to do this on a digraph and then on a graph. Our goal is to eventually adapt the counterexample to use an exponential update function, but this seems very difficult, because exponential functions do not have sharp thresholds (places where the ratio between the values of the function at adjacent inputs changes sharply): for $F(x) = C^x$, that ratio is always exactly $C$.
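For comparison, using the same toy plateau function as in Week 6 next to an arbitrary exponential: the ratio between adjacent values of the plateau function drops sharply at the threshold, while an exponential has the same ratio everywhere.

    Ratios[{1, 2, 4, 4, 4}]          (* {2, 2, 1, 1}: the ratio drops sharply at the threshold *)
    Ratios[Table[3^x, {x, 0, 4}]]    (* {3, 3, 3, 3}: constant ratio, so no sharp threshold *)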

Week 9: July 20 - July 24

This week, we worked on our final presentation and paper. I also worked out an approach suggested by Professor Narayanan for eliminating the need for thresholds. While this fixed all the obstacles to getting an exponential counterexample that we had known about before, I also realized that the randomization steps in the counterexample were hard to implement on a graph, and I was not able to do that completely rigorously.

Week 10: July 27 - July 31

Although the program officially finished last Friday, I was still able to give a longer-form presentation this week at an optional poster session, which was a fun opportunity. Professor Narayanan thinks that the last bit of the proof of the exponential counterexample should be possible to make rigorous, but I am still stuck. Not only do I have to show that a Markov chain on three states whose transition matrix is very close to the identity has a stationary distribution very close to the uniform distribution (which I found a useful reference for), but I also have to show that the transition matrix really is close to the identity for the Markov chain I have, which will involve an annoying analysis of a binomial distribution. I have also tried other approaches to randomization that rely on bootstrapping rather than ergodicity, but they have their own issues. Finally, we have started to discuss whether we can say anything interesting about the problem on specific graphs, like the complete graph. I expect to continue working on this problem with my adviser for a while after the program, and hopefully we will end up with a paper!
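As a toy numeric check of the first of those statements (the matrix below is made up, and I am assuming the relevant regime is one where the off-diagonal entries are also roughly balanced; it is not the chain from the actual construction):

    (* A 3-state transition matrix close to the identity with roughly balanced off-diagonal entries. *)
    p = {{980, 11, 9}, {10, 979, 11}, {9, 12, 979}}/1000;
    pi = First[NullSpace[Transpose[p] - IdentityMatrix[3]]];   (* left eigenvector of p for eigenvalue 1 *)
    N[pi/Total[pi]]   (* about {0.323, 0.354, 0.324}, reasonably close to uniform *)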

References & Links

Lubetzky, Eyal. “Dynamics for the critical 2D Potts/FK model: many questions and a few answers.” Charles River Lecture Series, October 2018.

Funding

We were supported by the Rutgers Math Department; NSF grant DMS-1800521; and CoSP, a project funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 823748. This work was done as part of the DIMACS REU 2020 program.