Edward's REU 2025 Web Page

About Me

Name: Edward Xiong
Email: eyxiong@mit.edu
Home Institution: Massachusetts Institute of Technology
Project: Truth Learning in a Social Network
Mentor: Jie Gao
Collaborators: William Guo

About My Project

TODO

Research Log

Week 1 (May 28-May 31)

I met with William and Professor Gao to discuss research ideas that we could explore. William and I agreed to work on sequential learning specifically, and I read some of the more important papers on the topic. William is focusing on analyzing how adversaries can impact asymptotic learning, and I'm working on further characterizing configurations which give asymptotic learning. I have been working specifically on networks described by directed acyclic graphs, or equivalently, networks in which the signals arrive in a fixed order.

Specifically, I spent some time considering how different equilibrium strategies may affect social learning. If every agent is fully rational, they can choose any strategy when their posterior is tied between the two possible values of the truth; we can characterize a natural family of such tie-breaking strategies by a probability $p \in [0,1]$: on a tie, each agent follows its own private signal with probability $p$. It is straightforward to construct a family of graphs which achieves asymptotic learning for any $p > 0$ but fails for $p = 0$. I believe that if a family $\mathcal{F}$ of graphs achieves learning for some $p \in (0, 1]$, then it achieves learning for every $p \in (0, 1]$.
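
To build intuition for how the tie-breaking parameter enters the Bayesian decision rule, here is a minimal simulation of the classic special case where every agent observes all earlier actions. (This particular network does not achieve asymptotic learning for any $p$, since information cascades eventually lock in; the families mentioned above have different structure. The sketch only illustrates how posterior ties arise and how $p$ resolves them.)

    import numpy as np

    rng = np.random.default_rng(0)

    def prob_action_one(L, lam, r, p):
        # P(action = 1) for an agent facing public log-likelihood ratio L,
        # where r = P(signal = 1 | theta) and lam is the LLR of one signal.
        if np.isclose(L, lam):       # a 0-signal makes the posterior exactly 1/2
            return r + (1 - r) * (1 - p)
        if np.isclose(L, -lam):      # a 1-signal makes the posterior exactly 1/2
            return r * p
        if L > lam:                  # public evidence outweighs any one signal
            return 1.0
        if L < -lam:
            return 0.0
        return r                     # otherwise the agent follows its signal

    def simulate(n, q, p, theta=1):
        # n Bayesian agents act in order, each seeing all earlier actions.
        # Signals are correct with probability q; exact posterior ties are
        # broken in favor of the agent's own signal with probability p.
        lam = np.log(q / (1 - q))
        L = 0.0                      # public log-likelihood ratio for theta = 1
        actions = np.empty(n, dtype=int)
        for i in range(n):
            s = theta if rng.random() < q else 1 - theta
            total = L + (lam if s == 1 else -lam)
            if np.isclose(total, 0.0):
                a = s if rng.random() < p else int(L > 0)
            else:
                a = int(total > 0)
            # observers update L using the known (randomized) decision rule
            pa1 = prob_action_one(L, lam, q, p)      # P(a = 1 | theta = 1)
            pa0 = prob_action_one(L, lam, 1 - q, p)  # P(a = 1 | theta = 0)
            L += np.log(pa1 / pa0) if a == 1 else np.log((1 - pa1) / (1 - pa0))
            actions[i] = a
        return actions

    for p_tie in [0.0, 0.5, 1.0]:
        acc = np.mean([simulate(300, q=0.6, p=p_tie)[-1] for _ in range(1000)])
        print(f"p = {p_tie}: P(agent 300 is correct) ~ {acc:.3f}")

Here the late agents' accuracy stays bounded away from 1 for every $p$, so a family achieving learning for $p > 0$ has to exploit network structure beyond what this example has.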

I also thought about how the strength of the private signals can affect learning. After reviewing the known classes of graphs which have asymptotic learning, I saw that none of these classes depended on the value of $q$, the probability that any private signal is correct. That is, all the constructed classes achieved learning for all $q > \frac{1}{2}$. Thus I conjectured that asymptotic learning is also a property independent of $q$. I spent some time working on this problem, but I didn't make too much progress.
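
As a small illustration, the toy simulator above already shows the final accuracy moving with $q$ in that (non-learning) example; the conjecture is about whether the yes/no question of learning itself can ever depend on $q$:

    for q in [0.55, 0.7, 0.9]:
        acc = np.mean([simulate(300, q=q, p=0.5)[-1] for _ in range(1000)])
        print(f"q = {q}: P(agent 300 is correct) ~ {acc:.3f}")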

Week 2 (June 2-June 6)

On Tuesday, June 3, every student was required to do a short presentation on their project. Our presentation was fairly straightforward, explaining the problem and our current ideas on how to proceed.

I continued working on the same questions for most of this week. I gained a better intuition for how agents perform Bayesian inference, and we found a small graph which has different behaviors for small and large values of $q$. Working with William, I was able to construct a network which exploited this behavior to achieve learning for small $q$ but not large $q$, which is a pretty counterintuitive result.

I also spent some time working to characterize graphs which achieve random truth learning, where the agents act in a random order rather than a fixed one. Since we were able to find networks exhibiting very "fragile" behavior, we had the idea that adding some randomness might make networks more robust against such behaviors. I reviewed some of the previous literature more carefully, and I started looking at how graphs which achieve random learning behave under small changes; for example, I think that removing a single vertex or edge from a network might not stop asymptotic learning.
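
To experiment with random orderings, I use a small Monte Carlo harness. One caveat: true Bayesian inference on an arbitrary graph is exactly the hard part, so the agents below take a naive majority vote over their own signal and the actions they observe; it is a stand-in, useful only for quickly probing how edits to a graph shift the estimate.

    import numpy as np

    rng2 = np.random.default_rng(1)

    def random_learning_rate(adj, q, trials=4000):
        # adj[i] is the set of neighbors agent i can observe. Agents act in a
        # uniformly random order; each sees the actions of earlier-arriving
        # neighbors and takes a majority vote (ties go to the private signal).
        # Returns an estimate of P(the last agent to arrive acts correctly).
        n = len(adj)
        correct = 0
        for _ in range(trials):
            order = rng2.permutation(n)
            pos = np.empty(n, dtype=int)
            pos[order] = np.arange(n)            # pos[i] = arrival time of i
            action = np.zeros(n, dtype=int)
            for i in order:
                s = int(rng2.random() < q)       # w.l.o.g. the truth is 1
                seen = [action[j] for j in adj[i] if pos[j] < pos[i]]
                ones, votes = s + sum(seen), 1 + len(seen)
                action[i] = s if 2 * ones == votes else int(2 * ones > votes)
            correct += action[order[-1]]
        return correct / trials

    # hypothetical example: a star, then the same star with one leaf removed
    star = [set(range(1, 9))] + [{0} for _ in range(8)]
    smaller = [set(range(1, 8))] + [{0} for _ in range(7)]
    print(random_learning_rate(star, q=0.7))
    print(random_learning_rate(smaller, q=0.7))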

Week 3 (June 9-June 13)

For most of the week, I was thinking about how learning networks behave under small changes. I obtained a nice result showing that removing any single vertex from a network that achieves learning leaves a network that still achieves learning. I have been working on other possible changes, such as adding or deleting edges, where it is easy to see that these modifications should almost always preserve learning. However, I'm having trouble finding a direction that will allow us to get stronger results, partly because it is difficult to describe agents performing Bayesian inference in a general way.

William also proved a new result about adversaries in arbitrary random networks. I think we can continue to make progress on this general topic of the robustness of learning networks. Together with Professor Gao, we came up with some conjectures that seem interesting, such as whether removing bounded-degree vertices affects learning. I have been looking at the role of such vertices in networks that achieve learning, and I have also been trying to analyze how rational agents act.

Week 4 (June 16-June 19)

Note the shortened week: Juneteenth was observed on June 20th. We continued to work on understanding characteristics of graphs that achieve random learning. I realized that these networks have a nice property: the learning rate of a specific agent is monotonic in its position in the ordering. This is a very natural argument, and I'm honestly surprised I missed it for so many weeks. It allowed us to easily improve last week's bound on modifying a graph, and we were able to extend it to modifying vertices in many other ways.

The intuitive reason this argument is so useful is that, on a given network, we can compare random learning under different probability distributions over orderings. For example, if we modify the behavior of a vertex, then for any other vertex we can argue that its learning rate given that it arrives after the modified vertex is no worse than its learning rate given that it arrives before the modified vertex.
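
As a sanity check, the monotonicity is easy to see empirically in the toy complete-observation simulator from Week 1, where it also has a one-line explanation: a later agent can always copy an earlier agent's action, so a rational later agent does weakly better.

    runs = np.array([simulate(50, q=0.6, p=1.0) for _ in range(5000)])
    acc_by_position = runs.mean(axis=0)        # accuracy by arrival position
    print(np.round(acc_by_position[[0, 4, 19, 49]], 3))   # weakly increasing, up to noise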

Thus we can obtain better bounds for a variety of cases, such as adding or removing a constant number of vertices or edges. While this approach still runs into the same issue of not truly analyzing the rational behavior of agents, I hope to use it to gain insight into removing small-degree vertices.

Acknowledgements

This research is supported by NSF AI Institute ACTION IIS-2229876.