<h1>Martin Koutecký | DIMACS REU 2013</h1>
<p><a class="reference external" href="http://reu.dimacs.rutgers.edu/~martink/">http://reu.dimacs.rutgers.edu/~martink/</a></p>
<h2>The rest of the REU</h2>
<p>2013-07-17, Martin Koutecký</p>
<p>So, the REU is nearing its end for me and I have not been reporting
anything here... what's the matter?</p>
<p>The problem is that at first I got very excited, only to realize
later that I am in fact stuck.</p>
<p>For the past four weeks I have been working on a coloring algorithm for
shrub-depth, trying to generalize the very simple and elegant ILP
solution presented by Michael Lampis in the original ND paper. Vojta and
I investigated a few approaches during our time at Princeton. I
decided to pursue one that looked promising, and for some time I felt I
had solved the problem. However, when I tried writing the solution down, I
realized I had only solved part of the problem, and the easier part
at that.</p>
<p>Roughly speaking, the task is the following. I need to find a
"canonical" way to describe colorings of graphs of bounded shrub-depth,
such that this canonical form is expressible as a solution to an
ILP, yet succinct enough to need only <span class="math">\(f(k,d)\)</span>
variables,
where <span class="math">\(k\)</span>
is the number of labels and <span class="math">\(d\)</span>
the shrub-depth. This "canonical
form" approach was successful in my thesis and in other ND-related
research, but I know of no examples where it works for
shrub-depth. We'll have to see. For now I'm in the "stuck" phase,
which just sometimes happens in research. It sucks, but it is necessary.</p>
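<p>For context, the ND coloring ILP I am trying to generalize looks roughly as follows. This is a sketch from memory, so the details may differ from Lampis' exact formulation. A graph of neighborhood diversity <span class="math">\(k\)</span> partitions into types <span class="math">\(T_1,\dots,T_k\)</span>, each a clique or an independent set, with all-or-nothing edges between types; a color class can then be described by its <em>shape</em>, the set of types it intersects.</p>

```latex
% Sketch of the q-coloring ILP for neighborhood diversity k (from memory).
% A "valid shape" S \subseteq [k] is a set of pairwise non-adjacent types;
% x_S counts the colors whose class meets exactly the types in S.
\begin{align*}
  & x_S \in \mathbb{Z}_{\ge 0}
      && \text{for every valid shape } S \subseteq [k],\\
  & \textstyle\sum_{S \ni i} x_S \ge 1
      && \text{for every independent-set type } T_i
         \text{ (one color can absorb all of } T_i\text{)},\\
  & \textstyle\sum_{S \ni i} x_S \ge n_i
      && \text{for every clique type } T_i \text{ of size } n_i
         \text{ (a color uses at most one vertex of a clique)},\\
  & \textstyle\sum_{S} x_S \le q.
\end{align*}
```

<p>The point is that there are at most <span class="math">\(2^k = f(k)\)</span> variables, independently of <span class="math">\(n\)</span>, so Lenstra's algorithm makes the whole thing FPT. Finding an analogous "shape" description for shrub-depth is exactly where I am stuck.</p>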
<h2>Second &amp; third week</h2>
<p>2013-06-22, Martin Koutecký</p>
<p>The second week I was still mostly reading up on the background of
what I am studying. As I mentioned <a class="reference external" href="http://reu.dimacs.rutgers.edu/~martink/first-week.html">previously</a>, there is a <a class="reference external" href="http://arxiv.org/abs/1302.4266">beautiful new result</a> by
Michael Lampis on <span class="math">\(MSO_1\)</span>
model checking lower bounds for simple
graphs (such as paths) which, as a corollary, contains the hardness
result for cliques I was looking for (basically saying <span class="math">\(MSO_2\)</span>
model
checking on cliques is not even in <a class="reference external" href="https://complexityzoo.uwaterloo.ca/Complexity_Zoo:X#xp">XP</a>).</p>
<p>The idea of the result is actually simple enough for me to sum it up
here. We want to prove that deciding some <span class="math">\(MSO_1\)</span>
-definable properties
is hard already on paths. These are just about the simplest structures
one can get -- their only distinguishing property is their
length. The complexity assumption we use for the proof is
slightly stronger than the usual <span class="math">\(P \neq NP\)</span>
 -- this time we're
assuming that <span class="math">\(P_1 \neq NP_1\)</span>
(where <span class="math">\(P_1\)</span>
and <span class="math">\(NP_1\)</span>
are the classes
of <a class="reference external" href="https://en.wikipedia.org/wiki/Unary_language">unary languages</a>
decidable in polynomial deterministic / non-deterministic time). This
assumption is equivalent to <span class="math">\(EXP \neq NEXP\)</span>
. To see one direction, assume that
<span class="math">\(P_1 = NP_1\)</span>
-- then we can unary-code any language <span class="math">\(L \in NEXP\)</span>
with
at most exponential blow-up and since this new <span class="math">\(L_{exp}\)</span>
is unary, we
have <span class="math">\(L_{exp} \in NP_1 = P_1\)</span>
and thus <span class="math">\(L \in EXP\)</span>
.</p>
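<p>Spelled out slightly more (my own sketch of the standard padding argument; I am glossing over how exactly <span class="math">\(x\)</span> is packed into the tally length):</p>

```latex
% Padding argument for P_1 = NP_1  =>  EXP = NEXP (sketch).
% Take L \in NEXP, decided in nondeterministic time 2^{n^c}, and define
% its tally version by padding the encoding of x up to exponential length:
\[
  L_{exp} = \{\, 1^{m(x)} \;:\; x \in L \,\},
  \qquad m(x) \ge 2^{|x|^c}, \quad x \text{ recoverable from } m(x).
\]
% On input 1^N one can recover x and simulate the NEXP machine in time
% 2^{|x|^c} \le N, i.e. nondeterministic polynomial time in N; hence
% L_{exp} \in NP_1 = P_1.  Undoing the padding, deciding x \in L takes
% deterministic time poly(2^{|x|^c}) = 2^{O(|x|^c)}, so L \in EXP.
```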
<p>What Lampis shows in his paper is that the rules for the
computation of a Turing machine can be encoded in an <span class="math">\(MSO_1\)</span>
 sentence,
and that this computation can be carried out <em>in unary</em>. It was previously
known that if we allow labelled paths (e.g. every vertex is either
black or white, encoding zeros and ones), paths can be viewed as
computations. The new result in Lampis' paper is that it can also be
done in unary. To achieve this Lampis had to show the existence of
some very concise formulas to compare the length of path segments; the
described construction is interesting in itself.</p>
<p>But the theorem I was looking for is not concerned with paths
(remember that graphs with bounded neighborhood diversity have bounded
diameter); it talks about the complexity of <span class="math">\(MSO_2\)</span>
(that is, formulas
containing edge set quantifiers) model checking. But that follows from
the theorem above easily: take an arbitrarily big clique and preface
the formula with an <span class="math">\(\exists P \subseteq E, \varphi_{\text{path}}(P)\)</span>
,
where <span class="math">\(\varphi_{\text{path}}(P)\)</span>
is an MSO formula asserting that <span class="math">\(P\)</span>
is a
path. Now use the previously described result.</p>
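<p>One standard way to write <span class="math">\(\varphi_{\text{path}}(P)\)</span> (my sketch, not Lampis' exact formula): the edges of <span class="math">\(P\)</span> induce maximum degree two, exactly two vertices of degree one, and a connected subgraph. Note that the connectivity clause quantifies over vertex sets, which is why this lives in MSO rather than plain FO.</p>

```latex
% A sketch of \varphi_path(P); "v covered" abbreviates "v is incident
% to some edge of P", and connectivity needs the set quantifier over X.
\begin{align*}
\varphi_{\text{path}}(P) \;\equiv\;
  & \forall v\; \neg \exists e_1,e_2,e_3 \in P\,
      \big(\text{pairwise distinct} \wedge \text{all incident to } v\big) \\
  \wedge\; & \text{exactly two covered vertices are incident to exactly
      one edge of } P \\
  \wedge\; & \forall X \subseteq V\,
      \big( (\exists u \in X)(\exists w \notin X)\ u, w \text{ covered}
            \;\rightarrow\; \exists e \in P \text{ with one endpoint in } X
            \text{ and one outside} \big).
\end{align*}
```

<p>A connected graph with maximum degree two and exactly two degree-one vertices is a path, so the three clauses together suffice.</p>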
<p>Thus the answer is: yes, there is an <span class="math">\(MSO_2\)</span>
-definable problem that
is hard (not even XP) already on cliques. But is that really what we
care about? How many natural problems model the computation of a
Turing machine? This remains an open question for us: is there a
<em>natural</em> W[1]-hard problem on graphs with bounded neighborhood
diversity when only the graph is the input? Also, Lampis' result seems
to be more evidence that <span class="math">\(MSO\)</span>
logics are way stronger than we
need in most cases, rather than paths or cliques being complicated
graphs. Perhaps there are other logics that behave more nicely? A recent
<a class="reference external" href="http://arxiv.org/abs/1104.3057">paper by Michał Pilipczuk</a> seems to
be going in that direction (but with regard to treewidth). So we have
some cues, but no specific results yet.</p>
<p>The rest of my time was dedicated to the study of <em>shrub-depth</em>, a
parameter generalizing neighborhood diversity. A fair amount of time
was consumed merely by getting familiar and comfortable with the
definitions, which I can now finally claim to be. The goal I am trying to
achieve now is to generalize the coloring algorithm for graphs of
bounded neighborhood diversity to graphs of bounded shrub-depth,
specifically (to make matters simple) of shrub-depth 2.</p>
<p>I would like to explain the motivation in a bit more detail. The
hierarchy of discussed parameters is such that <em>clique-width</em> is the
most general and <em>vertex cover</em> is the most restrictive. The coloring
problem is W[1]-hard on graphs of bounded clique-width -- but is FPT
on graphs with bounded <em>tree-width</em>, which lies under clique-width,
but is incomparable with shrub-depth and neighborhood
diversity. Shrub-depth can be seen as forming a finer hierarchy
between vertex-cover and clique-width (omitting some details). If
coloring is hard for clique-width, but easy for tree-width and
neighborhood diversity, where does shrub-depth fall? In other
words, where do things <em>break</em> -- when we go from neighborhood
diversity to shrub-depth, or when we go from shrub-depth to
clique-width?</p>
<p>To that end I have made some observations, but again, no definitive
result yet.</p>
<p>It is worth mentioning that we spent the last three days in Princeton,
attending a Spectral Methods mini-course. I have to say that Princeton
is a beautiful place well worth seeing; the mini-course had its ups
and downs, but overall I've enjoyed it (as did the rest of our group).</p>
<p>That's it for now. Wish me luck.</p>
<h2>First week</h2>
<p>2013-06-12, Martin Koutecký</p>
<p>Most of the first week was consumed by recovering from jet lag,
finding our way around the campus, and then figuring out what
exactly to focus on. Out of many attractive options, I chose to
follow up on the subject of my master's thesis (<a class="reference external" href="/pages/master-thesis-excerpts.html">excerpts</a>, <a class="reference external" href="http://koutecky.name/mgr/mgr.pdf">the whole pdf</a>), a graph parameter called
<em>neighborhood diversity</em>.</p>
<p>For now what that means for me is getting deeper into the technicalities
of papers I knew existed but which I understood only
superficially.</p>
<p>For example, graphs with neighborhood diversity <span class="math">\(k\)</span>
 only take <span class="math">\(O(k^3 \log n)\)</span>
 bits to encode. What can we do with that? Well, it means that
any problem defined over them is a <a class="reference external" href="https://en.wikipedia.org/wiki/Sparse_language">sparse language</a> and <a class="reference external" href="http://blog.computationalcomplexity.org/2011/09/mahaneys-theorem.html">Mahaney's
theorem</a>
says that if some sparse language were NP-complete, then
P=NP. What can we do with <em>that</em>?</p>
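<p>To make the succinctness concrete, here is a minimal Python sketch (my own illustration, not taken from any of the papers): it groups vertices into types, where two vertices share a type iff they have the same neighborhood up to each other, and emits the class sizes plus the type-adjacency matrix, which is all the information such a graph carries.</p>

```python
def neighborhood_partition(n, edges):
    """Partition vertices 0..n-1 into types: u and v share a type
    iff N(u) - {v} == N(v) - {u}, i.e. equal neighborhoods up to
    each other (this covers both clique and independent-set types)."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    classes = []
    for v in range(n):
        # the relation is an equivalence, so comparing against one
        # representative per class is enough
        for cls in classes:
            u = cls[0]
            if adj[u] - {v} == adj[v] - {u}:
                cls.append(v)
                break
        else:
            classes.append([v])
    return classes, adj

def succinct_encoding(n, edges):
    """Encode a graph of neighborhood diversity k as its k class sizes
    plus a k x k type-adjacency matrix; a diagonal 1 marks a clique type.
    This fits comfortably within the O(k^3 log n) bound from the text."""
    classes, adj = neighborhood_partition(n, edges)
    k = len(classes)
    sizes = [len(c) for c in classes]
    mat = [[0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            if i == j:
                # a singleton class counts as an independent-set type
                mat[i][j] = int(len(classes[i]) > 1
                                and classes[i][1] in adj[classes[i][0]])
            else:
                mat[i][j] = int(classes[j][0] in adj[classes[i][0]])
    return sizes, mat
```

<p>For example, the complete bipartite graph <span class="math">\(K_{2,3}\)</span> has neighborhood diversity 2: <tt class="docutils literal">succinct_encoding(5, [(0,2),(0,3),(0,4),(1,2),(1,3),(1,4)])</tt> returns <tt class="docutils literal">([2, 3], [[0, 1], [1, 0]])</tt>.</p>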
<p>Second, what are some possibly hard problems for graphs with bounded
neighborhood diversity? Courcelle, Makowsky and Rotics
<a class="citation-reference" href="#courcellemr00" id="id1">[CourcelleMR00]</a> show that <span class="math">\(MSO_2\)</span>
model checking cannot be FPT
already on cliques unless EXP=NEXP (see the connection with the
above?). But the proof is very <em>sparse</em> (pun intended) and refers to old
results by Ronald Fagin <a class="citation-reference" href="#fagin75" id="id2">[Fagin75]</a> which are hard to
<em>parse</em>. Fortunately, there are some <a class="reference external" href="http://arxiv.org/abs/1302.4266">nice new results</a> <a class="citation-reference" href="#lampis13" id="id3">[Lampis13]</a> by Michael Lampis and
as a corollary the result I was looking for is also proved!</p>
<p>Third, in what ways could I extend my positive results? Perhaps there
is a parameter that generalizes neighborhood diversity...? And indeed
there is: shrub-depth. It was introduced by Ganian et
al. <a class="citation-reference" href="#ganianhnomr12" id="id4">[GanianHNOMR12]</a>, then some more work was done on it by Gajarský
and Hliněný <a class="citation-reference" href="#gajarskyh12" id="id5">[GajarskyH12]</a>. That's a lot of reading.</p>
<p>Well, I'm excited to find where this gets me, but so far I have no
definitive results I could report, just hunches I will try to chase in
the following weeks. Stay tuned.</p>
<table class="docutils citation" frame="void" id="courcellemr00" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id1">[CourcelleMR00]</a></td><td>Courcelle, Makowsky, Rotics 2000: Linear Time
Solvable Optimization Problems on Graphs of Bounded
Clique-Width</td></tr>
</tbody>
</table>
<table class="docutils citation" frame="void" id="fagin75" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id2">[Fagin75]</a></td><td>Fagin 1975: A spectrum hierarchy</td></tr>
</tbody>
</table>
<table class="docutils citation" frame="void" id="lampis13" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id3">[Lampis13]</a></td><td>Lampis 2013: Model Checking Lower Bounds for Simple
Graphs</td></tr>
</tbody>
</table>
<table class="docutils citation" frame="void" id="ganianhnomr12" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id4">[GanianHNOMR12]</a></td><td>Ganian, Hliněný, Nešetřil, Obdržálek, Ossona de Mendez,
Ramadurai 2012: When Trees Grow Low: Shrubs and Fast MSO1.</td></tr>
</tbody>
</table>
<table class="docutils citation" frame="void" id="gajarskyh12" rules="none">
<colgroup><col class="label" /><col /></colgroup>
<tbody valign="top">
<tr><td class="label"><a class="fn-backref" href="#id5">[GajarskyH12]</a></td><td>Gajarský, Hliněný 2012: Faster Deciding MSO
Properties of Trees of Fixed Height, and Some
Consequences.</td></tr>
</tbody>
</table>