Mentor: Farid Alizadeh, RUTCOR
subject to Ax = b and x non-negative
The project I would like to propose is to look for models and applications of SDP in various areas. There are already significant applications in combinatorial optimization and graph theory. I would like to explore applications in probability theory, statistics, and financial engineering. Here is a sample of specific problems:
a) Suppose you are asked to design a random walk with certain rules, and you wish to maximize the rate of convergence. What is the best random walk that you can come up with?
b) You are given some information about an unknown probability distribution, for example its first n moments, together with some other restrictions on the distribution. You wish to choose a distribution that satisfies these restrictions and is best in some sense: for instance, one that is as "symmetric as possible", or whose median is as close as possible to a particular number, or that meets some other criterion.
c) You are looking to approximate a function of several variables
y=f(x_1,...,x_n). You are given a "sample" of observations:
For x_11, x_12, ..., x_1n you observed y_1
For x_21, x_22, ..., x_2n you observed y_2
...
For x_m1, x_m2, ..., x_mn you observed y_m
Traditional regression analysis (or, more generally, approximation theory) asks you to find the best function f from a given class of functions (say polynomials, exponentials, linear functions, etc.) that fits these observations as closely as possible. We would like to study this problem with the added twist that the function f must be convex. Under certain circumstances this may yield an SDP problem.
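In the simplest one-variable setting, the flavor of such shape-constrained fitting can be sketched without any SDP machinery: if the fitting class is quadratics f(x) = ax^2 + bx + c, convexity on the real line reduces to the single sign constraint a >= 0. The data and the fallback refit below are purely illustrative.

```python
import numpy as np

# Hypothetical data: noisy samples of the convex function y = x^2.
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 40)
y = x**2 + 0.1 * rng.standard_normal(x.size)

# Unconstrained least-squares fit of f(x) = a*x^2 + b*x + c.
A = np.column_stack([x**2, x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b, c = coef

# A quadratic is convex on R iff a >= 0; if the unconstrained fit
# violates this, clamp a to 0 and refit the remaining coefficients.
if a < 0:
    a = 0.0
    bc, *_ = np.linalg.lstsq(A[:, 1:], y, rcond=None)
    b, c = bc
```

In the genuinely multivariate polynomial case, convexity amounts (roughly) to the Hessian of f being positive semidefinite everywhere, and relaxations of that condition are what lead to semidefinite formulations.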
The prospective student should be familiar with linear programming, linear algebra, and introductory probability. (S)he should also be willing to do some simple programming, mainly using a very high-level language such as Matlab (or a Matlab-like language), writing scripts to model a problem and feed it to an SDP solver (which is already written).
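Problem (a) above can be made concrete with a small numerical sketch: for a reversible random walk, the rate of convergence to the stationary distribution is governed by the second-largest eigenvalue modulus (SLEM) of the transition matrix, and choosing transition probabilities to minimize the SLEM can be cast as an SDP. The lazy walk on a 4-cycle below is a hypothetical example.

```python
import numpy as np

# Lazy simple random walk on a 4-cycle: with probability 1/2 stay put,
# otherwise move to one of the two neighbors uniformly at random.
C = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
P = 0.5 * np.eye(4) + 0.25 * C

# The second-largest eigenvalue modulus governs how fast P^t approaches
# the uniform stationary distribution; smaller SLEM means faster mixing.
mods = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
slem = mods[1]
```

Designing the "best" walk then means choosing the free transition probabilities (here, the laziness and edge weights) to make this quantity as small as possible.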
Mentor: Endre Boros, RUTCOR
This project will aim at developing a method of constructing such majorants, and perhaps even at characterizing them for small functions depending on 3, 4, or 5 variables.
Mentor: Dr. Vladimir Gurvich, RUTCOR
Given n finite sets S_1, ..., S_n, define a graph G with n vertices v_1, ..., v_n in which two vertices v_i and v_j are connected by an edge if and only if the two differences S_i \ S_j and S_j \ S_i each have at least k elements. Every graph can be realized in this way if k is allowed to be arbitrarily large. But is this still true if k is bounded by a constant? Even for k = 2, neither a proof nor a counterexample is known. If k = 1, then only co-occurrence graphs can be realized.
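The construction is easy to experiment with; a minimal Python sketch (with hypothetical sets S_i) is:

```python
from itertools import combinations

def difference_graph(sets, k):
    """Edge {v_i, v_j} iff both S_i \\ S_j and S_j \\ S_i have >= k elements."""
    edges = set()
    for i, j in combinations(range(len(sets)), 2):
        if len(sets[i] - sets[j]) >= k and len(sets[j] - sets[i]) >= k:
            edges.add((i, j))
    return edges

# Three pairwise-disjoint 2-element sets realize a triangle for k = 2,
# and realize the empty graph once k exceeds the set sizes.
S = [{1, 2}, {3, 4}, {5, 6}]
```

Searching over small set systems of this kind is one way to hunt computationally for a counterexample at k = 2.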
Mentor: Dr. Mel Janowitz, DIMACS
From the viewpoint of graph theory, the input is a complete graph on which there is specified an edge-length function. Cluster algorithms may be viewed as transformations of this edge length function into a new function that somehow gives more information on the structure of the graph. The idea is to divide the graph into subgraphs in such a way that distinct subgraphs are separated, and each given subgraph is highly connected. The problem is complicated by the fact that any increase in internal connectivity often forces a decrease in separation, so this becomes an interesting optimization problem.
The idea of the project is to investigate cluster analysis from this graph-theoretic vantage point, and to formulate cluster algorithms in terms of graph-theoretic concepts such as connectivity, separation, and bridges. A bridge of a symmetric graph is an edge whose removal causes the graph to have an extra connected component. There are a number of unpublished and largely unexplored algorithms based on bridges. The goal of the project will be to understand these algorithms, investigate their properties, and possibly start implementing them in an appropriate programming language.
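As a starting point, all bridges of a graph can be found in linear time by a standard depth-first-search lowpoint computation; a minimal Python sketch (the example graph is hypothetical) is:

```python
def bridges(adj):
    """Bridges of an undirected simple graph given as {vertex: set of neighbors}."""
    disc, low, found = {}, {}, []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge to an ancestor
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:           # v's subtree cannot reach above u
                    found.append((u, v))

    for u in adj:
        if u not in disc:
            dfs(u, None)
    return found

# Two triangles joined by the single edge (2, 3): that edge is the only bridge.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
```

A bridge-based clustering algorithm would then, for example, delete bridges of the (thresholded) dissimilarity graph and take the resulting components as clusters.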
The project will also examine past graph theoretic models for cluster analysis, and attempt to create a model that will place cluster analysis within the realm of graph theoretic concepts that have arisen for other reasons. Once this is done, there will hopefully be feedback between the two subjects.
No prior knowledge of cluster analysis is assumed, and a combinatorial approach will be used. The exact nature of the project will be negotiated and can take a number of directions.
Mentor: Dr. Bahman Kalantari, Computer Science
The problems of testing the existence of a lambda-magic labeling and of computing the minimum-weight lambda-magic labeling, as well as constrained versions of these problems, include some of the most celebrated problems in polyhedral combinatorics and combinatorial optimization: for example, the perfect matching problem (1-magic labeling) and its weighted version; the perfect 2-matching problem (2-magic labeling) and its weighted or capacitated versions; and the Hamiltonian cycle problem and its weighted version, the traveling salesman problem (TSP), which can be viewed as the minimum-weight 2-magic labeling in which the set of positively labeled edges forms a connected subgraph of G.
Although the problems of computing the minimum magic labeling and the minimum-weight lambda-magic labeling are solvable in polynomial time, most constrained cases (e.g. the Hamiltonian cycle problem) are NP-hard. More generally, magic labeling problems can be defined with respect to hypergraphs, in which case even testing the existence of a lambda-magic labeling is NP-complete. Indeed, for some very special hypergraphs, the existence of certain lambda-magic labelings remains one of the most central problems in combinatorial design theory, difficult to establish either theoretically or computationally. Despite the deceptively simple description of magic labeling problems, an enormous set of challenging problems can be stated. Even in the case of graphs, some constrained versions of magic labeling problems give a very challenging set of problems that, at the very least, can be used to assess the strengths and weaknesses of general integer programming methods and codes.
The purpose of this research project is to discover new findings about this fundamental problem. We wish to assess the effectiveness of using magic labeling problems in solving some hard combinatorial problems, in particular TSP and its variants. Assume that G is complete and edge-weighted, with weights satisfying the triangle inequality, and that the number of vertices n is even. For i = 1, 2, 3, let W_i denote the weight of the minimum-weight i-magic labeling in which each edge label is at most two (i.e. capacitated, for i = 3). Let T_* denote the weight of the minimum-weight TSP tour. It is well known and quite easy to show that 2W_1 and W_2 are lower bounds on T_*. It can be shown that (2/3)W_3 is also a lower bound on T_* (see Kalantari and Khosrovshahi, "Magic Labeling in Graphs: Bounds, Complexity, and an Application to a Variant of TSP", Networks, 28, 1996, 211-219). It can also be shown that W_3 can be used to give a 2-approximate solution to a difficult variant of TSP.
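The classical bound 2W_1 <= T_* is easy to verify by brute force on tiny instances; the sketch below (a hypothetical four-point Euclidean instance) computes both sides exhaustively:

```python
import itertools
import math

# Hypothetical metric instance: four points at the corners of the unit square.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
n = len(pts)

def d(i, j):
    return math.dist(pts[i], pts[j])

def min_matching(verts):
    """Minimum-weight perfect matching on an even vertex tuple, by brute force."""
    if not verts:
        return 0.0
    i, rest = verts[0], verts[1:]
    return min(d(i, j) + min_matching(tuple(v for v in rest if v != j))
               for j in rest)

W1 = min_matching(tuple(range(n)))                       # min 1-magic labeling weight

T_star = min(sum(d(p[k], p[(k + 1) % n]) for k in range(n))
             for p in itertools.permutations(range(n)))  # optimal tour weight
```

On the unit square the bound is tight: twice the matching weight equals the tour length, which is one way to see that the bound cannot be improved in general.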
The question is how good the new lower bound is, and how it can be incorporated within Branch-and-Bound or Branch-and-Cut algorithms for TSP or its variants. It is hoped that these questions can be answered, at least computationally, with respect to some previously defined sets of test problems for TSP, using existing codes.
Mentor: Wilma Olson, Chemistry (Computational Chemistry/Biology)
To understand how the genetic code is translated into three-dimensional structures used in biological processes, our group is analyzing the known geometry of DNA bases in chemical structures and developing mathematical models to incorporate the sequence-dependent bending, twisting, and translation of known fragments in DNA molecules of varying lengths and chemical composition.
Mentor: Mario Szegedy, Computer Science
Mentor: Mahesh Viswanathan, DIMACS
One popular algorithmic technique, called Model Checking, is concerned with determining whether a formal model of a system, often presented as a graph of states and state transitions, satisfies certain correctness properties. The method works by systematically stepping through the global states of the system while checking various properties at each stage. However, such a method runs into the well-known problem of "state explosion" which severely limits the size of the systems that can be analyzed. In more detail, in contrast to the compact representation of system descriptions using variables, the explicit representation of the state space (on which model checking performs the search) includes the enumeration of all possible values for the variables which is prohibitively large.
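The explicit-state search at the heart of model checking is simple to sketch: a breadth-first search over the reachable global states, checking the property at each one. The toy mutual-exclusion model below (two processes sharing one lock; a hypothetical example) illustrates both the method and why the state count blows up as variables are added.

```python
from collections import deque

# Toy model: two processes, each 'idle' or 'crit', sharing one lock.
# A process may enter the critical section only when the lock is free.
def successors(state):
    pcs, lock = state
    for i in (0, 1):
        if pcs[i] == 'idle' and not lock:
            yield (pcs[:i] + ('crit',) + pcs[i + 1:], True)
        if pcs[i] == 'crit':
            yield (pcs[:i] + ('idle',) + pcs[i + 1:], False)

def check(init, invariant):
    """Breadth-first search of reachable states, checking the invariant at each."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return False, s                 # counterexample state
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True, len(seen)                  # property holds; states explored

init = (('idle', 'idle'), False)
ok, info = check(init, lambda s: s[0] != ('crit', 'crit'))
```

Here only 3 of the 8 syntactically possible states are reachable, and mutual exclusion holds in all of them; with many processes or variables, the reachable set itself is what becomes prohibitively large.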
One approach that has been found to be effective is compositional model checking. It is a divide-and-conquer paradigm in which one verifies components of the software for local correctness and then deduces the global correctness of the system from the local properties of the components. The project will involve extending existing techniques for compositional model checking and experimenting with examples in a well-known model checker. Background in logic and algebra will be useful, though not crucial.