Consider a network of agents, each of which wants to guess the correct value of some ground truth state. In a sequential order, each agent makes its decision using a single private signal, which has a constant probability of error, together with observations of the actions of its network neighbors that decided earlier in the order. We are interested in enabling network-wide asymptotic truth learning (ATL): in a network of n agents, almost all agents make a correct prediction with probability approaching 1 as n goes to infinity. This project concerns how random orderings and carefully crafted decision orders can be designed for specific graph topologies to achieve this end. We are also interested in the adversarial robustness of particular graphs, namely whether agents can still achieve asymptotic truth learning in the presence of adversarial agents that try to sway the network's beliefs away from the ground truth.
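To make the setting concrete, here is a minimal simulation sketch. The binary ground truth, the common signal accuracy p, and the majority-over-observations heuristic (standing in for a full Bayesian update) are simplifying assumptions of mine, not fixed features of the model.

```python
import random

def sequential_learning(adj, order, p, truth=1):
    """Sketch of the sequential-guessing model. Agent i receives a private
    signal equal to the ground truth with probability p, observes the
    actions of neighbors that decided earlier, and (as a simple stand-in
    for the Bayesian update) follows the majority of those observations
    plus its own signal, breaking ties with the private signal."""
    actions = {}
    for i in order:
        signal = truth if random.random() < p else 1 - truth
        votes = [actions[j] for j in adj[i] if j in actions] + [signal]
        ones = sum(votes)
        if 2 * ones > len(votes):
            actions[i] = 1
        elif 2 * ones < len(votes):
            actions[i] = 0
        else:
            actions[i] = signal  # tie: fall back on the private signal
    return actions

# Fraction of correct guesses on a complete graph under a random order.
# The majority heuristic herds here, so the fraction does not approach 1;
# this is one reason the decision order and the topology matter.
n = 200
adj = {i: set(range(n)) - {i} for i in range(n)}
order = list(range(n))
random.shuffle(order)
actions = sequential_learning(adj, order, p=0.7)
print(sum(a == 1 for a in actions.values()) / n)
```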
This research was done at the 2025 DIMACS REU program, supported by the [insert funding source].
Prior to the program starting, I did some background reading on asymptotic truth learning in social networks. Most notably, the work by Lu et al. from a previous iteration of this REU noted that ATL can be achieved so long as there exists an agent adjacent to \omega(1) independent neighbors (or, more generally, any agent obtaining a high-quality signal) that can then propagate the signal to the rest of the network. Bahar et al. analyze a uniformly random decision ordering, under which a "celebrity graph" (a bipartite graph with vastly uneven groups) can obtain ATL. In essence, the celebrities act as opinion leaders that both obtain a high-quality signal and propagate it to the rest of the network. Arieli et al. extended this by noting that any graph in which each vertex has nearly-independent neighbors, a condition known as the local learning requirement, can achieve ATL under a uniformly random decision ordering. I also briefly read up on the repeated learning setting, notably Mossel et al.
[citations to be added later]
Entering this week, I was particularly interested in the adversarial robustness of different topologies and in non-Bayesian decision algorithms (weighted majority voting). I met with Prof. Jie Gao and Edward Xiong in person for the first time, and we discussed potential directions to take the project. I was convinced that finding weights for the weighted majority voting algorithm is equivalent to computing the CDF of a generalized Bernoulli distribution, which may be difficult to do efficiently (see the sketch below). We then discussed pivoting to an entirely different setting, the naming game. As of writing this, I think we can push the current topic at least a little further, so I have started focusing my efforts on robustness by looking at the effects of adversaries in the repeated learning setting.
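To spell out where the difficulty comes from, here is a brute-force sketch of evaluating a given weight vector. The function name and the example values are my own illustration; the point is that the weight accrued by correct voters follows the generalized Bernoulli distribution in question.

```python
from itertools import product
import math

def weighted_majority_accuracy(weights, probs):
    """Exact probability that a weighted majority vote matches the ground
    truth, given independent private signals; probs[i] is the probability
    that agent i's signal is correct. The enumeration ranges over all 2^n
    signal outcomes: under arbitrary real weights, the total weight of
    correct voters can have up to 2^n distinct support points."""
    total = sum(weights)
    acc = 0.0
    for outcome in product([0, 1], repeat=len(weights)):  # 1 = correct signal
        prob = 1.0
        for correct, p in zip(outcome, probs):
            prob *= p if correct else 1.0 - p
        weight_correct = sum(w for w, c in zip(weights, outcome) if c)
        if weight_correct > total / 2:       # strict weighted majority
            acc += prob
        elif weight_correct == total / 2:    # split ties evenly
            acc += 0.5 * prob
    return acc

# Log-odds weights, which are known to be optimal for independent signals
probs = [0.9, 0.7, 0.6]
weights = [math.log(p / (1 - p)) for p in probs]
print(weighted_majority_accuracy(weights, probs))
```

With unit weights the vote total is Poisson binomial, whose CDF admits a simple O(n^2) dynamic program over the number of successes; arbitrary real weights break that recurrence, which, as far as I can tell, is the source of the difficulty.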
There are different ways "robustness" can be defined, as seen in Bohren 2014 and Frick et al. 2020, though the setting most similar to ours that I read about this week is perhaps Mueller-Frank 2018, in which at each iteration an adversary can make an \epsilon-adjustment to the belief of a single agent (i.e., if the agent would have broadcast opinion a, the adversary can change it to any value in [a - \epsilon, a + \epsilon]). Interestingly, for any \epsilon > 0, such an adversary influencing an arbitrary agent can eventually sway a network using DeGroot learning to converge to any arbitrary belief, so DeGroot learning is not robust. This suggests repeated learning is surprisingly vulnerable to adversaries, though I suspect the adversary's power is significantly diminished in the sequential learning setting. Next week, I'll look into the effects of graph topology on the robustness of Bayesian learners.
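To see the attack mechanically, here is a sketch of DeGroot learning with an \epsilon-adjustment adversary. The trust matrix, victim choice, and clipping rule are my reconstruction of the setting rather than the paper's exact protocol.

```python
import numpy as np

def degroot_with_adversary(W, beliefs, target, eps, victim=0, steps=5000):
    """DeGroot learning under an epsilon-adjustment adversary: at each
    iteration the adversary shifts the victim's broadcast belief by at
    most eps toward a target, and the row-stochastic trust matrix W then
    averages the broadcast beliefs as usual."""
    b = np.asarray(beliefs, dtype=float)
    for _ in range(steps):
        broadcast = b.copy()
        delta = np.clip(target - broadcast[victim], -eps, eps)
        broadcast[victim] += delta      # the adversary's tiny nudge
        b = W @ broadcast               # standard DeGroot averaging step
    return b

# Complete graph on 4 agents with uniform trust: even eps = 0.01 drags
# every belief from around 0.1 to the adversary's target of 0.9.
n = 4
W = np.full((n, n), 1.0 / n)
print(degroot_with_adversary(W, [0.1, 0.2, 0.15, 0.05], target=0.9, eps=0.01))
```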
[more citations: Mossel et al, naming game, non-Bayesian voting algs]