Thank you to my advisor Xintong Wang for her guidance and support throughout this project.
This research was conducted at the 2025 DIMACS REU program, supported by NSF grant CCF-2447342.
I spent this week learning about prediction markets and automated market makers. After meeting with my advisor, we settled on the problem of differentially allocating liquidity in an LMSR market. This builds on a previous paper of hers that examined multiresolution interval markets.
This week we decided to study a specific market design similar to the multiresolution design. It is essentially a nested version of the LMSR: each submarket has its own liquidity and is assigned its own weight, which changes based on the number of shares bought in that submarket. We arrived at this design after an earlier idea of assigning a constant weight to each submarket did not work out, since it would force the prices in each submarket to sum to a constant and therefore could not represent all probability distributions. I then worked on proving that the desirable properties of a prediction market still hold in this design, including bounded loss, no arbitrage, path independence, and the ability to assign consistent prices to all of the outcomes/securities.
The resulting market essentially behaves as two dependent submarkets that automatically correct for arbitrage while still allowing for price discovery.
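To make the design concrete, here is a minimal sketch of one way such a nested cost function could be written, assuming the outcomes are grouped into intervals, each interval has an inner LMSR with liquidity `b_in`, and the inner costs feed into an outer LMSR with liquidity `b_out`. The weighting scheme we actually study differs in its details, so the functional form, grouping, and parameters below are illustrative assumptions rather than the final design.

```python
import numpy as np

def lmsr_cost(q, b):
    """Standard LMSR cost: C(q) = b * log(sum_i exp(q_i / b)), computed stably."""
    m = np.max(q / b)
    return b * (m + np.log(np.sum(np.exp(q / b - m))))

def nested_lmsr_cost(q, groups, b_out, b_in):
    """One possible nested LMSR (illustrative sketch only): each group of
    outcomes has an inner LMSR with liquidity b_in, and the inner costs
    feed into an outer LMSR with liquidity b_out."""
    inner = np.array([lmsr_cost(q[g], b_in) for g in groups])
    return lmsr_cost(inner, b_out)

def prices(cost_fn, q, eps=1e-6):
    """Numerical instantaneous prices: partial derivatives of the cost."""
    q = q.astype(float)
    base = cost_fn(q)
    p = np.zeros_like(q)
    for i in range(len(q)):
        dq = q.copy()
        dq[i] += eps
        p[i] = (cost_fn(dq) - base) / eps
    return p

# Example: 4 outcomes split into 2 intervals of 2
q = np.array([1.0, 0.0, 0.0, 0.0])
groups = [np.array([0, 1]), np.array([2, 3])]
p = prices(lambda x: nested_lmsr_cost(x, groups, b_out=10.0, b_in=5.0), q)
print(p, p.sum())  # prices are positive and sum to ~1
```

Even in this simplified form, the prices remain a valid distribution because each inner market's prices sum to one within its group and the outer market distributes weight across groups.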
This week I did some background reading on the simulation model my advisor had used in a previous paper. In this model, traders have noisy access to the true signal, their beliefs follow a Beta distribution, and they have exponential utility functions. This simple model is useful because it allows us to solve for the market-clearing equilibrium: the price vector that would arise if the traders traded only among themselves. We can then compare this to the equilibrium reached by an automated market maker.
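As a rough reference, here is a sketch of what such a trader model might look like for a single binary security; the actual model covers many outcomes and has its own parameterization, so the Bernoulli noise model, the Beta(1, 1) prior, and the equal risk aversion below are my own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def trader_beliefs(theta, n_traders, n_obs=20):
    """Each trader sees n_obs Bernoulli(theta) samples and forms a
    Beta(1 + successes, 1 + failures) posterior; the belief used for
    trading is the posterior mean.  (Illustrative noise model.)"""
    successes = rng.binomial(n_obs, theta, size=n_traders)
    return (1 + successes) / (2 + n_obs)

def cara_demand(p, price, a=1.0):
    """Optimal share demand for a binary security under exponential
    (CARA) utility with risk aversion a and belief p."""
    return np.log(p * (1 - price) / ((1 - p) * price)) / a

def clearing_price(beliefs):
    """With identical risk aversion, demands sum to zero when the
    price's log-odds equal the average of the traders' log-odds."""
    logits = np.log(beliefs / (1 - beliefs))
    return 1 / (1 + np.exp(-logits.mean()))

beliefs = trader_beliefs(theta=0.7, n_traders=50)
pi = clearing_price(beliefs)
print(pi, cara_demand(beliefs, pi).sum())  # aggregate demand ~ 0
```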
I then implemented this setting to check how our market compares to the standard LMSR. I ran multiple simulations to see how, for the same worst-case loss, different choices of the outer and inner liquidities affect convergence and volatility. After each trade, I plotted the KL divergence between the market-clearing equilibrium and the current market prices. It seems that for many choices of the liquidities, the new market performs worse: it can be very volatile or converge too slowly.
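The metric itself is just the KL divergence between the two price vectors, treated as distributions over outcomes; a small helper along the following lines computes it after each trade (the epsilon clipping is only a numerical-safety choice, not something specific to our setup).

```python
import numpy as np

def kl_divergence(p_eq, p_market, eps=1e-12):
    """KL(p_eq || p_market) between the market-clearing equilibrium
    prices and the current AMM prices, both treated as distributions."""
    p = np.clip(np.asarray(p_eq, dtype=float), eps, 1.0)
    q = np.clip(np.asarray(p_market, dtype=float), eps, 1.0)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

print(kl_divergence([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))
```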
I continued running simulations, focusing on how the size and number of intervals affect performance. I met with my advisor and we discussed possible settings where this market could be used. One idea was a market where the liquidity provider's willingness to provide liquidity depends mostly on eliciting beliefs about a subset of the outcomes rather than the whole outcome space.
This week we met with Fang-Yi Yu from George Mason University, who was interested in this market design. Some directions we are interested in exploring are the behavior in the limiting cases of the ratio of the inner liquidity to the outer liquidity, and measuring the robustness of the market relative to a baseline LMSR with equal worst-case loss.
Instead of measuring loss with the KL divergence, I began using the mean squared error. This allows us to focus on prediction accuracy over only a subset of the outcomes. For instance, we can try to adjust the parameters to elicit accurate prices on only the first half of the outcomes, which could lower the worst-case loss.
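In code this is just the squared-error analogue restricted to the indices we care about; the subset and the price vectors in the example below are arbitrary placeholders, not outputs of our simulations.

```python
import numpy as np

def mse_on_subset(p_eq, p_market, idx):
    """Mean squared error between equilibrium prices and market prices,
    restricted to the chosen subset of outcome indices."""
    p_eq = np.asarray(p_eq, dtype=float)
    p_market = np.asarray(p_market, dtype=float)
    return float(np.mean((p_eq[idx] - p_market[idx]) ** 2))

# e.g. accuracy on only the first half of N outcomes
N = 10
p_eq = np.linspace(1, 2, N); p_eq /= p_eq.sum()   # hypothetical equilibrium prices
p_mkt = np.full(N, 1.0 / N)                       # hypothetical market prices
print(mse_on_subset(p_eq, p_mkt, np.arange(N // 2)))
```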
We also found that when we split the outcomes into N^(1/2) intervals and take the ratio of the inner and outer liquidities to 0 or infinity, the cost function reduces nicely to the regular LMSR with a different liquidity.
Define a K-Wu to be K rays originating at a common point. Determine the maximum number of regions into which the plane can be split by n K-Wus. This is a generalization of the Wu problem given by Neil Sloane in his talk.
You are given a standard chessboard with a white queen and a black king. Give a starting configuration of the board and a (not necessarily deterministic) procedure that generates an infinite sequence of premoves for the queen satisfying the following condition: even if the king is aware of your procedure, they cannot, with non-zero probability, make an infinite sequence of valid premoves during which they are at no point able to capture the queen.
Here’s something I cooked recently: