Controlling DNA–RNA strand displacement kinetics with base distribution
Proceedings of the National Academy of Sciences 122:23 (2025) e2416988122
Abstract:
DNA–RNA hybrid strand displacement underpins the function of many natural and engineered systems. Understanding and controlling factors affecting DNA–RNA strand displacement reactions is necessary to enable control of processes such as CRISPR-Cas9 gene editing. By combining multiscale modeling with strand displacement experiments, we show that the distribution of bases within the displacement domain has a very strong effect on reaction kinetics, a feature unique to DNA–RNA hybrid strand displacement. Merely by redistributing bases within a displacement domain of fixed base composition, we are able to design sequences whose reaction rates span more than four orders of magnitude. We extensively characterize this effect in reactions involving the invasion of dsDNA by an RNA strand, as well as the invasion of a hybrid duplex by a DNA strand. In all-DNA strand displacement reactions, we find a predictable but relatively weak sequence dependence, confirming that DNA–RNA strand displacement permits far more thermodynamic and kinetic control than its all-DNA counterpart. We show that oxNA, a recently introduced coarse-grained model of DNA–RNA hybrids, can reproduce trends in experimentally observed reaction rates. We also develop a simple kinetic model for predicting strand displacement rates. On the basis of these results, we argue that base distribution effects may play an important role in natural R-loop formation and in the function of the guide RNAs that direct CRISPR-Cas systems.
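The kind of kinetic model the abstract mentions can be illustrated with a toy calculation. Below is a minimal sketch, not the authors' oxNA simulations or their actual kinetic model: branch migration is treated as a one-dimensional birth-death chain in which each forward step exchanges a DNA:DNA base pair for an RNA:DNA pair, with a base-dependent free-energy change. The per-base ΔG values, attempt frequency, and Metropolis step rule are all illustrative assumptions; the point is only that reordering bases at fixed composition changes the mean first-passage time and hence the rate.

```python
import numpy as np

def mfpt(kf, kb):
    """Mean first-passage time from site 0 through site n-1, via the
    standard birth-death recursion T_j = 1/kf[j] + (kb[j]/kf[j]) * T_{j-1},
    with a reflecting boundary at site 0 (kb[0] is unused)."""
    T, total = 0.0, 0.0
    for j in range(len(kf)):
        T = 1.0 / kf[j] + (kb[j] / kf[j]) * T   # first-passage time j -> j+1
        total += T
    return total

def displacement_rate(seq, dG, k0=1e6):
    """Effective displacement rate (1/s). Each branch-migration step swaps
    a DNA:DNA pair for an RNA:DNA pair with base-dependent free-energy
    change dG[base] (in kT); a Metropolis rule biases the step rates."""
    g = np.array([dG[b] for b in seq])
    kf = k0 * np.minimum(1.0, np.exp(-g))   # forward (displacing) steps
    kb = k0 * np.minimum(1.0, np.exp(+g))   # backward steps
    return 1.0 / mfpt(kf, kb)

# Illustrative per-base values (kT) for the DNA -> RNA pair swap; real
# values are context-dependent and would come from nearest-neighbor data.
dG = {"G": -0.8, "C": -0.4, "A": +0.6, "T": +0.6}

# Three permutations of the same base composition (4 G, 2 C, 4 A, 2 T):
for seq in ("GGGGCCAAAATT", "AAAATTGGGGCC", "GAGAGATCGACT"):
    print(seq, f"{displacement_rate(seq, dG):.3g} /s")
```

Even in this toy, permutations of an identical base composition yield different rates because the ordering of favorable and unfavorable steps reshapes the free-energy landscape the walker must cross; the paper reports that such redistribution spans more than four orders of magnitude experimentally.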
Characterising the Inductive Biases of Neural Networks on Boolean Data
(2025)
Deep neural networks have an inbuilt Occam’s razor
Nature Communications 16:1 (2025) 220
Abstract:
The remarkable performance of overparameterized deep neural networks (DNNs) must arise from an interplay between network architecture, training algorithms, and structure in the data. To disentangle these three components for supervised learning, we apply a Bayesian picture based on the functions expressed by a DNN. The prior over functions is determined by the network architecture, which we vary by exploiting a transition between ordered and chaotic regimes. For Boolean function classification, we approximate the likelihood using the error spectrum of functions on data. Combining this with the prior yields an accurate prediction for the posterior, measured for DNNs trained with stochastic gradient descent. This analysis shows that structured data, together with a specific Occam’s razor-like inductive bias towards (Kolmogorov) simple functions that exactly counteracts the exponential growth of the number of functions with complexity, is a key to the success of DNNs.
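The Bayesian picture described in the abstract can be sketched numerically. The toy code below is an illustration under assumed choices (architecture, width, depth, init scale), not the paper's exact procedure: it estimates the prior P(f) over Boolean functions by sampling randomly initialized networks, then forms a posterior over functions consistent with a training set D using a simple 0-1 likelihood, P(f|D) ∝ P(f)·1[f fits D]. The weight variance sigma_w stands in for the knob that moves the network between ordered and chaotic regimes.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
n = 5                                            # number of input bits
# All 2^n Boolean inputs, mapped to +/-1.
X = np.array([[(i >> b) & 1 for b in range(n)]
              for i in range(2 ** n)], dtype=float) * 2 - 1

def sample_function(sigma_w=2.0, width=64, depth=3):
    """Boolean function (as a 2^n bit-string) expressed by one random
    initialization; sigma_w controls order vs. chaos in this toy setup."""
    h = X
    for _ in range(depth):
        W = rng.normal(0.0, sigma_w / np.sqrt(h.shape[1]),
                       size=(h.shape[1], width))
        h = np.tanh(h @ W)
    w = rng.normal(0.0, sigma_w / np.sqrt(width), size=width)
    return tuple((h @ w > 0).astype(int))

# Prior P(f): frequency with which random inits express each function.
samples = 20000
prior = Counter(sample_function() for _ in range(samples))

# Training set D: m inputs labeled by a simple target (here, bit 0).
m = 16
idx = rng.choice(2 ** n, size=m, replace=False)
target = (X[:, 0] > 0).astype(int)

# Posterior with 0-1 likelihood: restrict the prior to functions that
# fit D exactly, then renormalize.
posterior = {f: c for f, c in prior.items()
             if all(f[i] == target[i] for i in idx)}
Z = sum(posterior.values())
if Z:
    top = max(posterior, key=posterior.get)
    print(f"P(fit D) ~ {Z / samples:.3f}; "
          f"top function holds {posterior[top] / Z:.2f} of posterior mass")
```

In toy experiments of this kind, the sampled prior tends to concentrate on simple functions, so the posterior favors simple hypotheses consistent with D, which is the qualitative Occam effect the paper quantifies against the exponential growth of function counts with complexity.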
Visualising Feature Learning in Deep Neural Networks by Diagonalizing the Forward Feature Map
(2024)
An exactly solvable model for emergence and scaling laws in the multitask sparse parity problem
Advances in Neural Information Processing Systems 37 (NeurIPS 2024)