We studied a fairly new two-person game called the Nagger Mover Game. When n is prime, there exist algorithms that allow Nagger to visit n-1 seats in O(n^2) moves. We worked on improving the strategy by reducing the number of moves required by Nagger. We also worked on characterizing all the configurations in which it is not worth playing any longer, since Mover can prevent Nagger from occupying new seats.
As Internet search engines continue to grow, many have developed sophisticated auction-based advertising models. Typically, potential advertisers specify a maximum daily budget and a set of keywords that they wish to bid on, and then the search engine runs an automated auction for each user query in order to determine which advertisements to display. We have examined the problem of optimizing the revenue generated by these auctions by developing an extensible tool for simulating them. To this end, we have incorporated support for second-price bidding, a variable number of advertising slots, and the option to draw incoming queries from a given probability distribution. The program also implements several different heuristics for choosing advertisers. This functionality will allow us to test existing heuristics as well as quickly implement and evaluate new ones.
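The second-price mechanism for multiple ad slots can be sketched in a few lines. This is a minimal illustration, not the simulator described above; the function name `run_gsp_auction` and the toy bids are assumptions.

```python
# Toy sketch of one auction round with multiple ad slots under
# generalized second-price rules: bidders are ranked by bid, and each
# slot winner pays the bid of the bidder ranked just below them.

def run_gsp_auction(bids, num_slots):
    """bids: dict advertiser -> bid; returns list of (winner, price) per slot."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i in range(min(num_slots, len(ranked))):
        winner = ranked[i][0]
        # Winner of slot i pays the next-highest bid (0 if nobody is below).
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((winner, price))
    return results

print(run_gsp_auction({"A": 5.0, "B": 3.0, "C": 1.0}, num_slots=2))
# A wins slot 1 paying B's bid; B wins slot 2 paying C's bid
```

A realistic simulator would additionally track each advertiser's remaining daily budget and skip bidders who have exhausted it.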
Post-market drug surveillance tries to find "signals" by analyzing large databases of case reports collected from various sources. The methods currently in use are computationally intensive and known to contain certain flaws. In our research we are testing new methods of detecting "signals" that are free of the known flaws in the current methodology and less computationally intensive. In this talk, I will show the strengths of our methods as tested to date, and discuss what we feel is needed to make our method stronger.
Given a set of size 2k (k>1), assign each subset of size k a color, red or blue, such that two requirements hold: (1) each k-set has the same color as its complement, and (2) each (k+1)-set contains a k-subset of each color. A constructive proof has been found for even k, and a probabilistic proof for odd k>8. A coloring for k=3 has been found, but k=5 and k=7 remain unsolved. My work on this problem has primarily involved the odd case.
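For the smallest case k=2 the two conditions can be verified exhaustively. The sketch below is purely illustrative (the helper name `find_coloring` is an assumption): it colors one representative of each complementary pair, so condition (1) holds by construction, and checks condition (2) over all (k+1)-sets.

```python
# Brute-force search for a valid coloring in the smallest case k = 2
# (ground set of size 4).  Illustrative only.
from itertools import combinations, product

def find_coloring(k):
    ground = range(2 * k)
    ksets = [frozenset(s) for s in combinations(ground, k)]
    # Condition (1): a k-set and its complement share a color, so we
    # only need to choose a color per complementary pair.
    pairs, seen = [], set()
    for s in ksets:
        comp = frozenset(ground) - s
        if s not in seen and comp not in seen:
            pairs.append((s, comp))
            seen.update({s, comp})
    for colors in product(("red", "blue"), repeat=len(pairs)):
        coloring = {}
        for (s, comp), c in zip(pairs, colors):
            coloring[s] = coloring[comp] = c
        # Condition (2): every (k+1)-set contains k-subsets of both colors.
        if all({coloring[frozenset(t)] for t in combinations(big, k)} == {"red", "blue"}
               for big in combinations(ground, k + 1)):
            return coloring
    return None

print(find_coloring(2) is not None)  # → True: a valid coloring exists for k = 2
```

The same brute force becomes infeasible very quickly as k grows, which is why the constructive and probabilistic arguments mentioned above are needed.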
The problem of numerically simulating solutions to the Nonlinear Schrödinger equation (NLS, also referred to as the Gross-Pitaevskii equation) has been around for many years. Several algorithms and computer programs have been created to address this problem more efficiently. Continuing this line of work, we are developing the program Kitty - a standalone package intended to make such simulations more accessible and easier to compute. We also look at an unresolved question in nonlinear optics to which the application of Kitty has proven useful.
We will begin by defining the associahedron, and then define a moduli space. Using the cross-ratio, we will show how to embed the moduli space into (S^1)^N, and thereby define a topology on the moduli space via preimages of open sets. We will show how an associahedron can be constructed from a moduli space. We will then present a generalization of the associahedron, the multiplihedron, along with several examples of multiplihedra we have found. Lastly, we will discuss our progress in understanding the properties of these multiplihedra.
We will construct nth degree equations over 2 by 2 matrices that have m solutions where m is anywhere between 0 and "2n choose 2." It is already known that if more solutions than that exist, then there are infinitely many solutions.
We also consider leaving n (the degree of the equation) equal to 2 and instead experimenting with the size of the coefficient matrices – we have come up with numerous examples and some theorems about the existence and number of solutions to many types of polynomial equations.
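Solution counts of matrix polynomial equations can be explored exhaustively when the coefficient domain is small. The sketch below is a toy finite analogue, not the setting of the abstract: it counts solutions of X^2 = I among 2x2 matrices with entries in GF(2).

```python
# Brute-force count of solutions of X^2 = I over 2x2 matrices with
# entries in GF(2).  A toy analogue for illustration only.
from itertools import product

def mat_mul_gf2(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

I = ((1, 0), (0, 1))
count = sum(1 for a, b, c, d in product(range(2), repeat=4)
            if mat_mul_gf2(((a, b), (c, d)), ((a, b), (c, d))) == I)
print(count)  # → 4: the identity plus the three involutions in GL(2,2)
```

Any solution of X^2 = I must be invertible, and GL(2,2) is isomorphic to S3, so the count matches the four elements of S3 of order dividing two.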
Reflections are unitary linear transformations that fix some subspace and multiply its orthogonal complement by a factor of -1. They play an important role in quantum computing; for example, one of its most important applications, Grover's search algorithm, can be described as the repeated application of two one-dimensional reflections (reflections whose fixed subspace is a line). In this talk we consider the following problem: given a black-box implementation (that is, we know what it does but not how it works) of a one-dimensional reflection operator, how might we recover the axis it fixes? Since reflections are determined completely by the subspace they fix, accomplishing this gives us all the information about our black box. We will discuss how to accomplish this.
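A classical linear-algebra analogue of the recovery problem can be sketched directly; this says nothing about the quantum query procedure of the talk, and all helper names here are assumptions. If R = 2aa^T - I reflects about the line spanned by a unit vector a, then Rv + v = 2(a.v)a is parallel to a for every v, so querying R on the standard basis reveals the axis.

```python
# Recover the fixed axis of a black-box one-dimensional reflection
# R = 2 a a^T - I by querying it on standard basis vectors.
import math

def make_reflection(a):
    n = len(a)
    return [[2 * a[i] * a[j] - (1.0 if i == j else 0.0) for j in range(n)]
            for i in range(n)]

def apply(mat, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in mat]

def recover_axis(reflect, dim):
    # reflect(e_j) + e_j = 2 a_j a; keep the largest such vector so the
    # chosen multiple of the axis is safely nonzero.
    best, best_norm = None, -1.0
    for j in range(dim):
        e = [0.0] * dim
        e[j] = 1.0
        w = [u + v for u, v in zip(reflect(e), e)]
        wn = math.sqrt(sum(x * x for x in w))
        if wn > best_norm:
            best, best_norm = w, wn
    return [x / best_norm for x in best]

axis = [1 / 3, 2 / 3, 2 / 3]                      # a unit vector
R = make_reflection(axis)
recovered = recover_axis(lambda v: apply(R, v), 3)
print([round(x, 3) for x in recovered])  # → [0.333, 0.667, 0.667]
```

The axis is only determined up to sign, which is exactly the ambiguity inherent in a reflection.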
This is joint work with Josef Cibulka, Vit Jelinek, Alexander Kazda, Bernard Lidicky, Eva Ondrackova and Martin Tancer.
The random tree process was introduced by Aldous in [Rand. Struct. Alg. 1, 1990, 383-402]. It is derived from the classical Erdős-Rényi random graph process, except that each considered edge is inserted only if the graph remains cycle-free.
Gerke, Schlatter, Steger, and Taraz showed in [The random planar graph process, preprint] that the most and the least probable trees are stars and paths, respectively. The exact probability of generating a star was already determined by Aldous; for a path, on the other hand, even the asymptotics of the probability are unknown.
In the talk we give an overview of the results and exhibit a recurrence formula for the number of permutations generating a certain path.
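The process described above can be sketched with a union-find structure; this is an illustrative implementation, with hypothetical names, that always outputs a spanning tree of K_n.

```python
# The random tree process: edges of K_n arrive in uniformly random
# order, and an edge is kept only if it closes no cycle.
import random

def random_tree(n, rng=random.Random()):
    parent = list(range(n))
    def find(v):                       # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
    rng.shuffle(edges)
    tree = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                   # keep the edge only if acyclic
            parent[ru] = rv
            tree.append((u, v))
    return tree

t = random_tree(8, random.Random(0))
print(len(t))  # → 7: a spanning tree on 8 vertices always has 7 edges
```

Note that the resulting spanning tree is not uniformly distributed, which is precisely why stars and paths have different generation probabilities.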
Given a set R of n axis-parallel rectangles in the plane, let p(R) denote their pinning number, i.e., the minimum number of pins needed to pin each rectangle at least once, and let α(R) denote their independence number, i.e., the maximum number of pairwise disjoint rectangles in R. Let f(α) be the maximum pinning number over all rectangle configurations whose independence number equals α. Our main aim was to decide whether f(α)=O(α).
We have proven that if no two rectangles cross each other, then f(α)=O(α); the same holds when the rectangles' width-to-height ratios are bounded.
Moreover, restricting to configurations in which all vertices of the rectangles lie on the grid points of an α x α grid decreases f(α) by at most a constant multiplicative factor.
On the other hand, we have found configurations with large α for which p(R)=2α(R)-2.
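Both quantities can be computed by brute force for tiny configurations; the sketch below is illustrative only (helper names are assumptions). For pinning, it suffices to try candidate pins at pairs of a left edge and a bottom edge, since any pin can be slid left and down without leaving the rectangles it pierces.

```python
# Brute-force p(R) and alpha(R) for a small set of closed axis-parallel
# rectangles given as (x1, y1, x2, y2).
from itertools import combinations

def intersects(r, s):
    return r[0] <= s[2] and s[0] <= r[2] and r[1] <= s[3] and s[1] <= r[3]

def contains(r, p):
    return r[0] <= p[0] <= r[2] and r[1] <= p[1] <= r[3]

def alpha(rects):
    # Largest k admitting k pairwise-disjoint rectangles.
    return max(k for k in range(len(rects) + 1)
               if any(all(not intersects(a, b) for a, b in combinations(sub, 2))
                      for sub in combinations(rects, k)))

def pinning_number(rects):
    # Candidate pins: (some left edge, some bottom edge).
    cands = [(r[0], s[1]) for r in rects for s in rects]
    for k in range(1, len(rects) + 1):
        for pins in combinations(cands, k):
            if all(any(contains(r, p) for p in pins) for r in rects):
                return k
    return 0

rects = [(0, 0, 2, 2), (1, 1, 3, 3), (4, 0, 5, 1)]
print(pinning_number(rects), alpha(rects))  # → 2 2
```

The exhaustive search is exponential in n, so it only serves to check small candidate configurations, not to settle the f(α)=O(α) question.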
The firefighter problem is a dynamic graph theory problem that models the spread of a fire, or an infectious disease. In its simplest form, a single vertex of a graph catches fire at time 0. At the beginning of each subsequent timestep t, we place f(t) firefighters on vertices of the graph. Those vertices are then protected, that is, the fire is forever prohibited from spreading to them. After the firefighter placement, the fire spreads to all vertices that are adjacent to some vertex on fire but are not protected. The objective is to contain the fire, that is, to place firefighters in such a way that eventually all vertices on fire are adjacent only to protected vertices (and other vertices on fire).
It has been shown that any function with an average of more than 1.5 firefighters per timestep can contain the fire. At the beginning of the summer, we conjectured that there exists a periodic function f(t) with an average of 1.5 firefighters that cannot contain the fire. Our attempts to prove this led to a new conjecture, which (since it remains unproven) will be the focus of this talk.
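The process described above is easy to simulate for a fixed placement strategy. This sketch is a minimal illustration with hypothetical names; it does not address the conjecture, only the protect-then-spread dynamics.

```python
# Minimal simulation of the firefighter process on a finite graph.
# `strategy` maps timestep t to the set of vertices protected at t.
def simulate(adj, source, strategy, max_steps=100):
    burning, protected = {source}, set()
    for t in range(1, max_steps + 1):
        protected |= {v for v in strategy.get(t, set()) if v not in burning}
        frontier = {w for v in burning for w in adj[v]} - burning - protected
        if not frontier:           # fire can no longer spread: contained
            return burning
        burning |= frontier
    return burning

# Path 0-1-2-3-4, fire starts at 2; one firefighter per step contains it.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(simulate(path, 2, {1: {1}, 2: {4}}))  # → {2, 3}
```

On the infinite grid the same loop applies with a lazily generated neighborhood function, which is how candidate strategies for fractional firefighter budgets can be tested experimentally.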
We try to generalize the deterministic firefighter problem. Instead of each firefighter being certain to block the spread of a fire on the infinite grid, we make the change that each firefighter has probability p of being effective. During each time-step, the fire spreads, then the ineffective firefighters are removed, and then we place new firefighters on the grid. Some natural questions arise: What other models can we use? What is an optimal firefighter placement? How does the number of firefighters placed each turn, f(t), depend on the probability p?
The study of social networks allows us to measure their organizational success. Determining how knowledge impacts such success will help us delineate how much knowledge is enough to be successful and how much is not.
Schools are epidemiologically significant. Their role as mechanisms of infection transmission is certainly at odds with their intended educational purpose. When young children are home sick, a parent must stay home from work, imposing costs on the family in addition to the educational cost to the child. An ideal school policy would minimize infections while maximizing attendance. To that end, we created an agent-based model to test practical school policies. Individual students have personal healthcare-seeking thresholds, while the school mandates a globally imposed exclusion threshold. As expected, we do see that mandatory exclusion days correlate positively with average health and negatively with the average number of infected students. We now need to examine the robustness of this and other policies under conditions of imperfect parental compliance, while considering average daily absences.
A coordinate transformation of a quadratic binary function can be made by introducing a tree that covers the vertex set of a graph; the nature of this covering tree may determine the usefulness of the corresponding coordinate transformation in a branch-and-bound algorithm. We seek to characterize graphs that admit a covering tree which induces short base cycles and has other potentially desirable characteristics such as low maximum degree. We examine covering trees which have base cycles of length 3, and we address the question of when a graph has a covering tree which restricts to a tree on certain substructures.
Our research was on hypothesis testing for a difference in proportions with small samples. We studied the power of the tests, since the current methods are flawed: they overestimate power. We therefore used a new approach to give a more conservative estimate. We compared these two powers, along with the conditional power, for varied inputs that determined which test to use.
One mechanism of gene silencing is RNA interference (RNAi); in a newer variant, short interfering RNAs (siRNAs, 21-nucleotide-long RNA molecules) silence gene expression by binding to a target mRNA site. siRNAs differ in functionality, and it is desirable to determine the sequence properties that distinguish functional from nonfunctional siRNAs. In order to develop an accurate similarity metric, we computed the nucleotide mutations that are more common among pairs of functional siRNAs, as well as those that are more common among functional-nonfunctional pairs. In this talk I will describe our new sequence-similarity metric - as well as some older, less accurate metrics - and present the results obtained from applying it to 46 functional and nonfunctional siRNA sequences.
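The general shape of a position-weighted similarity metric can be sketched as follows. This is hypothetical: the actual metric of the talk derives its weights from observed mutation frequencies, whereas the uniform weights below are placeholders, and the helper name `similarity` is an assumption.

```python
# Hypothetical position-weighted similarity for 21-nt siRNA sequences:
# each position contributes its weight when the two sequences agree.
def similarity(seq1, seq2, weights=None):
    assert len(seq1) == len(seq2) == 21
    if weights is None:
        weights = [1.0] * 21           # placeholder: uniform weighting
    total = sum(weights)
    score = sum(w for a, b, w in zip(seq1, seq2, weights) if a == b)
    return score / total               # 1.0 for identical sequences

s = "AUGGCUACGUAGCUAGCUAGC"
print(similarity(s, s))  # → 1.0
```

Fitting the weight vector to the empirically common mutations is what turns this template into a discriminative metric for functional versus nonfunctional siRNAs.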
DNA microarrays are a technology used to quantify gene expression. There are many mathematical techniques used to analyze microarray data, with Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) being the most common. In our research, we consider modeling the expression data as a tensor, with the raw red and green channels as the third factor (gene classes and experimental conditions being the others). In this preliminary study, we use the Tucker-n decomposition of the tensor to elucidate the three factors that give rise to the observed microarray data. In a preliminary study of the yeast cell cycle, the results suggest that tensor decomposition can cluster similar genes by cycle, with further studies needed. In our second experiment, we considered the mean mRNA decay half-life by gene functional class on the genome of different strains of Escherichia coli. Our promising results show that the Tucker-n decomposition can accurately cluster genes by function. Finally, we will consider global analysis of mRNA abundance in E. coli at single-gene resolution using two-color fluorescent DNA microarrays with the Tucker-n decomposition, to verify that tensor decomposition yields the same gene clusters as the traditional preprocessing methods with less computation and fewer assumptions about the data.
RFID (Radio Frequency Identification) technology is becoming increasingly prevalent in industrial, commercial, and governmental applications. This rapid growth has spurred accompanying concern about the security of the protocols RFID tags use for authenticating themselves. Juels and Weis have proposed a modified form of the HB Protocol [1], which they call HB Plus, for use as an RFID authentication protocol. This summer, we have implemented the HB and HB Plus Protocols in Java and C++ and collected data on a variety of security parameters. In our talk, we will explain the math and data analysis which led us to parameters which we believe to be realistically applicable while still minimizing the number of false positives and false negatives.
1. A. Juels and S. Weis. Authenticating Pervasive Devices with Human Protocols. Advances in Cryptology (CRYPTO 2005), LNCS vol. 3621, Springer-Verlag, pp. 293-308, 2005. Updated version available at: http://www.rsasecurity.com/rsalabs/staff/bios/ajuels/publications/pdfs/lpn.pdf
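One authentication run of the basic HB protocol (not the HB Plus variant, which adds a blinding factor and a second secret) can be sketched as follows; the parameter values and helper names here are illustrative assumptions, not the ones evaluated in the talk.

```python
# Sketch of the basic HB protocol: the reader sends random binary
# challenges a, the tag replies with the noisy inner product
# <a, x> XOR nu, and the reader accepts if mismatches stay under a
# threshold.
import random

def hb_round(x, a, eta, rng):
    noise = 1 if rng.random() < eta else 0   # Bernoulli(eta) noise bit
    return (sum(ai * xi for ai, xi in zip(a, x)) + noise) % 2

def authenticate(x_reader, respond, k, rounds, threshold, rng):
    mismatches = 0
    for _ in range(rounds):
        a = [rng.randrange(2) for _ in range(k)]
        z = respond(a)
        if z != sum(ai * xi for ai, xi in zip(a, x_reader)) % 2:
            mismatches += 1
    return mismatches <= threshold

rng = random.Random(0)
secret = [rng.randrange(2) for _ in range(32)]
# A noiseless legitimate tag (eta = 0) is always accepted.
tag = lambda a: hb_round(secret, a, eta=0.0, rng=rng)
print(authenticate(secret, tag, k=32, rounds=50, threshold=10, rng=rng))  # → True
```

The security analysis concerns how eta, the number of rounds, and the threshold trade off false accepts of guessing adversaries against false rejects of honest but noisy tags.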
Graphs with bounded clique-width are of interest because they permit polynomial-time solutions to several NP-hard problems. Bipartite graphs in general do not have bounded clique-width, but we will exhibit an example of a monogenic class of bipartite graphs (forbidding $K_{1,3}+e$ as an induced subgraph) with bounded clique-width. Additionally, we prove that the class of $2P_3$-free graphs has unbounded clique-width. The presentation will outline these two proofs.
In this paper we show that a self-stabilizing algorithm for maximal induced matching does not exist.
Millions of people worldwide suffer from epilepsy and other seizure-related medical problems. Seizures can be difficult to predict, making treatment even more difficult. Today, many patients are unresponsive to standard treatments, and much about seizures remains to be understood. New methods of predicting seizures before they happen would not only enable more effective treatment but could lead to a better understanding of the dynamics that underlie seizures.
Data mining focuses on methods of extracting useful data and patterns from large quantities of information. There has been great interest recently in support vector machines (SVMs), which use geometrical representations to classify data. Once trained on a known set of data, SVMs can be used to classify new data.
My project focuses on how SVMs might be used to predict seizures. I'll talk about different methods of sampling data, different "kernels" for projecting that data into geometrical spaces, and the success rates of SVMs in classifying that data.
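A full SVM needs a quadratic-programming solver, so as a small stand-in the sketch below trains a kernel *perceptron* with an RBF kernel; it illustrates the same idea of classifying data after an implicit projection into a feature space. Data and parameters are toy values, not seizure recordings.

```python
# Kernel perceptron with an RBF kernel, as a simplified stand-in for a
# kernelized SVM classifier.
import math

def rbf(u, v, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def train(X, y, epochs=20):
    alpha = [0.0] * len(X)
    for _ in range(epochs):
        for i, x in enumerate(X):
            f = sum(alpha[j] * y[j] * rbf(X[j], x) for j in range(len(X)))
            if y[i] * f <= 0:      # misclassified: strengthen this point
                alpha[i] += 1.0
    return alpha

def predict(alpha, X, y, x):
    f = sum(alpha[j] * y[j] * rbf(X[j], x) for j in range(len(X)))
    return 1 if f > 0 else -1

# XOR-style data: not linearly separable, but separable via the RBF kernel.
X = [(0, 0), (1, 1), (0, 1), (1, 0)]
y = [1, 1, -1, -1]
alpha = train(X, y)
print([predict(alpha, X, y, x) for x in X])  # → [1, 1, -1, -1]
```

The choice of kernel plays the same role here as in a true SVM: it determines the geometry of the space in which the classes must become separable.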
Epilepsy is a condition affecting over 40 million people worldwide, characterized by repeated, (seemingly) unpredictable seizure activity. Much research has been done on predicting seizures, but comparatively little on screening. Given that this condition can manifest at any age (and may present safety issues), it would be valuable for the medical community to be able to diagnose epilepsy before the physical symptoms ever arise. On this basis, I employed two tools - the wavelet transform and entropy - to help determine whether any fundamental, underlying differences exist between the normal and the epileptic brain. I will discuss how these functions work as models, why they were applicable in this situation, and lastly, the findings of my research.