Evolving Heterotic Gauge Backgrounds: Genetic Algorithms versus Reinforcement Learning
The immensity of the string landscape and the difficulty of identifying solutions that match the observed features of particle physics have raised serious questions about the predictive power of string theory. Modern methods of optimisation and search can, however, significantly improve the prospects of constructing the standard model in string theory. In this paper we scrutinise a corner of the heterotic string landscape consisting of compactifications on Calabi-Yau three-folds with monad bundles and show that genetic algorithms can be successfully used to generate anomaly-free supersymmetric SO(10) GUTs with three families of fermions that have the right ingredients to accommodate the standard model. We compare this method with reinforcement learning and find that the two methods have similar efficacy but somewhat complementary characteristics.
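The genetic-algorithm search described in the abstract can be illustrated with a minimal sketch. Everything here is a toy stand-in: the fixed-length bit string takes the place of the encoded monad bundle data, and the one-max fitness function replaces the physical penalty function (anomaly freedom, stability, three-family index condition) used in the actual search. Only the GA machinery itself, with selection, crossover, and mutation, is what the abstract refers to.

```python
import random

random.seed(1)  # for reproducibility of this toy run

GENOME_LENGTH = 32
POP_SIZE = 50
MUTATION_RATE = 0.02
GENERATIONS = 60

def fitness(genome):
    # Toy stand-in: count of 1-bits. A real run would score the
    # physical consistency and phenomenology of the encoded bundle.
    return sum(genome)

def tournament(pop, k=3):
    # Select the fittest of k randomly drawn individuals.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # One-point crossover of two parent genomes.
    cut = random.randrange(1, GENOME_LENGTH)
    return a[:cut] + b[cut:]

def mutate(genome):
    # Flip each bit independently with small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(POP_SIZE)]
    return max(pop, key=fitness)

best = evolve()
```

In a landscape search, the interesting output is not the single fittest individual but the set of all distinct individuals that pass the physical cuts, accumulated over many runs.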
Heterotic String Model Building with Monad Bundles and Reinforcement Learning
We use reinforcement learning as a means of constructing string compactifications with prescribed properties. Specifically, we study heterotic SO(10) GUT models on Calabi-Yau three-folds with monad bundles, in search of phenomenologically promising examples. Due to the vast number of bundles and the sparseness of viable choices, methods based on systematic scanning are not suitable for this class of models. By focusing on two specific manifolds with Picard numbers two and three, we show that reinforcement learning can be used successfully to explore monad bundles. Training can be accomplished with minimal computing resources and leads to highly efficient policy networks, which produce phenomenologically promising states in nearly 100% of episodes and within a small number of steps. In this way, hundreds of new candidate standard models are found.
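The episode structure underlying such an RL search can be sketched as follows. All specifics here are illustrative assumptions, not the paper's actual environment: the state is a short list of integers standing in for monad bundle data, an action increments or decrements one entry, and `is_promising` is a hypothetical terminal check in place of the real physical conditions (anomaly freedom, three families, stability).

```python
import random

STATE_LEN = 6          # length of the toy integer state
MAX_STEPS = 100        # episode step cap
VALUE_RANGE = (0, 5)   # allowed range of each entry

def is_promising(state):
    # Hypothetical terminal check; a real environment would test
    # the physical consistency conditions on the encoded bundle.
    return sum(state) == 12 and max(state) <= 4

def step(state, action):
    # Action encodes (which entry, increment or decrement).
    idx, up = divmod(action, 2)
    new = state[:]
    new[idx] = min(max(new[idx] + (1 if up else -1), VALUE_RANGE[0]),
                   VALUE_RANGE[1])
    return new

def run_episode(policy):
    # Walk from a random state until a promising state or the cap.
    state = [random.randint(*VALUE_RANGE) for _ in range(STATE_LEN)]
    for t in range(MAX_STEPS):
        if is_promising(state):
            return state, t        # success: promising state found
        state = step(state, policy(state))
    return None, MAX_STEPS         # episode failed

# Random baseline policy; training replaces this with a network
# that maps states to action probabilities.
random_policy = lambda s: random.randrange(2 * STATE_LEN)
```

The abstract's headline numbers, nearly 100% of episodes ending in a promising state within a few steps, correspond to replacing `random_policy` with a trained policy network.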
Quark Mass Models and Reinforcement Learning
Journal of High Energy Physics volume 2021, Article number: 161 (2021)
In this paper, we apply reinforcement learning to particle physics model building. As an example environment, we use the space of Froggatt-Nielsen type models for quark masses. Using a basic policy-based algorithm we show that neural networks can be successfully trained to construct Froggatt-Nielsen models which are consistent with the observed quark masses and mixing. The trained policy networks lead from random starting configurations to phenomenologically acceptable models for over 90% of episodes and after an average episode length of about 20 steps. We also show that the networks are capable of finding models proposed in the literature when starting at nearby configurations.
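A "basic policy-based algorithm" of the kind the abstract mentions is REINFORCE, sketched here on a toy problem rather than the Froggatt-Nielsen environment. The setup is an assumption for illustration: a softmax policy over four actions with reward 1 only for action 3, updated with the policy gradient, where for a softmax over raw parameters the gradient of log pi(a) is one_hot(a) minus pi.

```python
import math
import random

random.seed(0)  # deterministic toy run

NUM_ACTIONS = 4
theta = [0.0] * NUM_ACTIONS  # raw policy parameters
lr = 0.5                     # learning rate

def softmax(x):
    m = max(x)
    z = [math.exp(v - m) for v in x]
    s = sum(z)
    return [v / s for v in z]

for _ in range(500):
    probs = softmax(theta)
    # Sample an action from the current policy.
    a = random.choices(range(NUM_ACTIONS), weights=probs)[0]
    # Toy reward: only action 3 pays off.
    reward = 1.0 if a == 3 else 0.0
    # REINFORCE update: theta += lr * reward * grad log pi(a),
    # with grad log pi(a) = one_hot(a) - probs for a softmax policy.
    for i in range(NUM_ACTIONS):
        g = (1.0 if i == a else 0.0) - probs[i]
        theta[i] += lr * reward * g
```

In the paper's setting the tabular parameters would be replaced by a neural network over model configurations, and the reward would score agreement with the observed quark masses and mixing.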