DIMACS
DIMACS REU 2018

General Information

Student: Sherry Sarkar
Office: Core Room 446
School: Georgia Institute of Technology
E-mail: sherry.sarkar@outlook.com
Project: Theoretical Properties of SAT Solvers

Project Description

Boolean satisfiability is a question of great interest in theoretical computer science as well as in areas of practical application. SAT is known to be NP-complete, yet there are randomized algorithms, based on random walks on graphs, that solve satisfiability much faster than brute force; an example is Schöning's algorithm. There is a good deal of experimental data on why some random walks work better than others, but little theoretical work explaining why this happens. The goal of my research this summer is to apply probabilistic Markov chain techniques (e.g., coupling) to two practical SAT algorithms and begin building a theoretical basis for comparison.


Weekly Log

Week 1:
This week I have been doing background research, primarily on random walks. Specifically, I have been reading "Random Walks on Graphs: A Survey" by Lovász. I've also been trying to gain intuition on why random walks work as effectively as they do and why certain algorithms don't outperform others in all cases.
Week 2:

I have continued my background research and begun thinking about some new problems. For my background research, I am close to finishing "Random Walks on Graphs: A Survey" and have started reading some sources on coupling methods for Markov chains. I've also started getting familiar with different types of Markov chains for finding satisfying assignments of a given formula.

The random walk in Schöning's algorithm works as follows: given a current assignment tau, choose a clause uniformly from the set of bad (unsatisfied) clauses, then choose a variable uniformly from the chosen clause, and flip that variable to get the new assignment. We can also model a random walk based on how many clauses a variable "makes" when flipped: given an assignment tau, flip whichever variable yields the most good (satisfied) clauses, choosing uniformly among the top variables in case of a tie. We can model yet another random walk based instead on how many clauses a variable "breaks" when flipped: given an assignment tau, choose the variable to flip from a distribution based on how many clauses each variable breaks when flipped.
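To make the first two rules concrete, here is a minimal Python sketch of a single step of Schöning's walk and of the "make" walk (the "break" rule is omitted since its exact distribution isn't pinned down above). The clause encoding and function names are my own illustrative choices, not code from my actual scripts: a clause is a tuple of signed integers, e.g. (1, -2, 3) means (x1 or not-x2 or x3), and an assignment maps each variable to a Boolean.

```python
import random

def satisfies(clause, tau):
    """True if the assignment tau satisfies this clause."""
    return any((lit > 0) == tau[abs(lit)] for lit in clause)

def bad_clauses(formula, tau):
    """All clauses that tau leaves unsatisfied."""
    return [c for c in formula if not satisfies(c, tau)]

def schoening_step(formula, tau):
    """Schöning's rule: pick a bad clause uniformly, then a variable
    in it uniformly, and flip that variable in place."""
    clause = random.choice(bad_clauses(formula, tau))
    v = abs(random.choice(clause))
    tau[v] = not tau[v]

def num_satisfied(formula, tau):
    return sum(satisfies(c, tau) for c in formula)

def make_step(formula, tau):
    """'Make' rule: flip the variable whose flip yields the most
    satisfied clauses, breaking ties uniformly at random."""
    best, best_vars = -1, []
    for v in tau:
        tau[v] = not tau[v]          # try flipping v
        s = num_satisfied(formula, tau)
        tau[v] = not tau[v]          # undo the trial flip
        if s > best:
            best, best_vars = s, [v]
        elif s == best:
            best_vars.append(v)
    v = random.choice(best_vars)
    tau[v] = not tau[v]
```

Each step flips exactly one variable, so both walks move along edges of the Boolean hypercube; only the rule for choosing the edge differs.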

One of my tasks this last week was to come up with counterexample formulas where the "make" and "break" algorithms find the satisfying assignment more slowly than Schöning's. The intuition for why these algorithms are not necessarily the best is that the number of unsatisfied clauses is not a reliable proxy for the Hamming distance between the current assignment and the satisfying assignment.

Week 3:

The highlight of this week is that I have found some nontrivial examples where Schöning's does better than break.

A lot of my work this week went into constructing and verifying counterexample Boolean formulas where, on average, Schöning's hits the (unique) SAT assignment in fewer steps than break. Since these formulas can have more than six variables (to make things nontrivial), I've started writing Python scripts to verify counterexamples for me instead of checking them by hand. The main program I wrote calculates hitting times from a point in the hypercube of possible assignments to the unique SAT assignment. Essentially, the program constructs the transition matrix of the random walk described by the algorithm, then derives and solves a system of linear equations. Unfortunately, both Mathematica and Python have failed to solve the hitting times of most formulas I've inputted because the matrix is "ill-conditioned", meaning it is too close to singular for a computer to solve reliably. This realization came as a blow, since I had spent a good couple of days perfecting my program. Luckily, the program did solve the hitting times for an important counterexample I had constructed, which gave me new insight and direction.
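A minimal sketch of this linear-system approach (illustrative names, not my actual script): enumerate all 2^n assignments, build the transition probabilities of Schöning's walk with the unique SAT assignment treated as absorbing, and solve (I - Q)h = 1, where Q is the transition matrix restricted to the transient states and h is the vector of expected hitting times.

```python
import itertools
import numpy as np

def satisfies(clause, tau):
    """tau is a tuple of Booleans, 0-indexed; clauses use signed 1-based ints."""
    return any((lit > 0) == tau[abs(lit) - 1] for lit in clause)

def step_distribution(formula, tau):
    """Probability of moving to each neighbor under Schöning's rule:
    uniform over bad clauses, then uniform over variables in the clause."""
    bad = [c for c in formula if not satisfies(c, tau)]
    probs = {}
    for c in bad:
        for lit in c:
            v = abs(lit) - 1
            nxt = tau[:v] + (not tau[v],) + tau[v + 1:]
            probs[nxt] = probs.get(nxt, 0.0) + 1.0 / (len(bad) * len(c))
    return probs

def hitting_times(formula, n):
    """Expected number of steps to reach the unique SAT assignment,
    from every other assignment, via (I - Q) h = 1."""
    states = list(itertools.product((False, True), repeat=n))
    sat = [s for s in states if all(satisfies(c, s) for c in formula)]
    assert len(sat) == 1  # the walk is absorbed at the unique SAT assignment
    transient = [s for s in states if s != sat[0]]
    idx = {s: i for i, s in enumerate(transient)}
    m = len(transient)
    Q = np.zeros((m, m))
    for s in transient:
        for nxt, p in step_distribution(formula, s).items():
            if nxt != sat[0]:
                Q[idx[s], idx[nxt]] += p
    h = np.linalg.solve(np.eye(m) - Q, np.ones(m))
    return {s: h[idx[s]] for s in transient}
```

For instance, on the toy formula (x1) and (x2) with unique SAT assignment (True, True), the walk from (False, False) must take one step to a distance-1 state and one more step to the target, so its hitting time is exactly 2 — a useful sanity check. The ill-conditioning I ran into shows up in this solve step for larger, more structured formulas.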

Ideally, this could generalize into sufficient conditions under which Schöning's does better than break; however, I've been struggling with where to draw the line on what exactly is sufficient. The few examples of sufficiency I have share some general patterns, but it has proven a challenge to outline the sufficient criteria rigorously.

Future direction lies in beginning to use coupling and starting to consider random SAT instances.

Week 4:

This week I have begun brainstorming how to couple the Markov chains described by break and Schöning's. Coupling these chains is different from the usual instances of coupling I've seen in the past: I've usually seen coupling used to bound the mixing time of a single chain, whereas here we intend to use coupling to compare the hitting times of two different chains. Also, while I still have the example formulas where Schöning's does better than break on average, my programs seem to reveal that break does better on average over the broad set of formulas with just one unique SAT assignment.

I've also begun coding better ways to calculate hitting times. While one can compute hitting times exactly by solving a system of linear equations, that approach tends to be numerically unstable. Thus, simulating the Markov chain itself might be most effective.
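The simulation approach can be sketched as follows (hypothetical names, and it assumes the SAT assignment is known in advance so we can detect when the walk hits it): run the walk many times from a fixed start, record the step count of each run, and average.

```python
import random
import statistics

def satisfies(clause, tau):
    return any((lit > 0) == tau[abs(lit) - 1] for lit in clause)

def schoening_step(formula, tau):
    """One step of Schöning's walk, mutating the list tau in place."""
    bad = [c for c in formula if not satisfies(c, tau)]
    clause = random.choice(bad)
    v = abs(random.choice(clause)) - 1
    tau[v] = not tau[v]

def simulate_hitting_time(formula, start, target, max_steps=10_000):
    """Number of steps until the walk first reaches the target assignment;
    runs that never hit within max_steps are censored at max_steps."""
    tau = list(start)
    for t in range(max_steps):
        if tau == list(target):
            return t
        schoening_step(formula, tau)
    return max_steps

def estimate(formula, start, target, trials=2000):
    """Monte Carlo estimate of the expected hitting time."""
    return statistics.mean(
        simulate_hitting_time(formula, start, target) for _ in range(trials)
    )
```

Unlike the linear-system solver, this trades exactness for numerical robustness: the estimate carries sampling error of order 1/sqrt(trials), but there is no matrix to become ill-conditioned.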

Week 5:

This week I have taken a very new direction, focusing more on Markov chains themselves. I am trying to come up with a general set of operations I can perform on a Markov chain while preserving its mixing time. My professor and I believe that, using these operations, coupling might be easier to apply to the chains described by break or Schöning's.

Week 6:

I have successfully come up with a small set of operations for modifying Markov chains while preserving mixing time. In principle, this could be used to modify algorithmic Markov chains like break and Schöning's and compare simpler versions while still preserving important properties. It can also be used to prove whether certain chains are rapidly mixing. My next goal is to return to my original research question and study the effectiveness of a "look ahead" property for Schöning's algorithm. Certain deterministic algorithms for k-SAT use look-ahead and have some of the best known running times.

Week 7:

I have been focusing my attention on a new operation that I believe could give a new way to prove that a random walk on a hypercube is rapidly mixing. I've also finished my final presentation (in the links below). I've spent some time looking at mixed algorithms of Schöning's and break as well, but that has not led to any interesting results.


Presentations


Additional Information