Ard Louis

Professor of Theoretical Physics

Research theme

  • Biological physics

Sub department

  • Rudolf Peierls Centre for Theoretical Physics

Research groups

  • Condensed Matter Theory
ard.louis@physics.ox.ac.uk

Louis Research Group

Publications

Double-descent curves in neural networks: a new perspective using Gaussian processes

Proceedings of the AAAI Conference on Artificial Intelligence, Association for the Advancement of Artificial Intelligence, 38:10 (2024), 11856-11864

Authors:

Ouns El Harzli, Bernardo Cuenca Grau, Guillermo Valle-Pérez, Adriaan A Louis

Abstract:

Double-descent curves in neural networks describe the phenomenon that the generalisation error initially descends with increasing parameters, then grows after reaching an optimal number of parameters which is less than the number of data points, but then descends again in the overparameterised regime. In this paper, we use techniques from random matrix theory to characterise the spectral distribution of the empirical feature covariance matrix as a width-dependent perturbation of the spectrum of the neural network Gaussian process (NNGP) kernel, thus establishing a novel connection between the NNGP literature and the random matrix theory literature in the context of neural networks. Our analytical expressions allow us to explore the generalisation behaviour of the corresponding kernel and GP regression. Furthermore, they offer a new interpretation of double descent in terms of the discrepancy between the width-dependent empirical kernel and the width-independent NNGP kernel.
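
To make the abstract's central comparison concrete, here is a minimal numpy sketch (not the authors' code) contrasting the width-dependent empirical feature covariance of a random one-hidden-layer ReLU network with the width-independent NNGP kernel, whose closed form for ReLU is the standard arc-cosine kernel of Cho and Saul. All sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def nngp_relu_kernel(X):
    """Width-independent NNGP kernel for one hidden ReLU layer:
    the arc-cosine kernel (Cho & Saul), for unit-variance weights."""
    G = X @ X.T
    norms = np.sqrt(np.diag(G))
    cos = np.clip(G / np.outer(norms, norms), -1.0, 1.0)
    theta = np.arccos(cos)
    return np.outer(norms, norms) / (2 * np.pi) * (np.sin(theta) + (np.pi - theta) * cos)

def empirical_kernel(X, width):
    """Width-dependent empirical feature covariance (1/width) * Phi Phi^T
    for random ReLU features with i.i.d. standard normal input weights."""
    W = rng.standard_normal((X.shape[1], width))
    Phi = np.maximum(X @ W, 0.0)   # hidden-layer activations
    return Phi @ Phi.T / width

n, d = 200, 20
X = rng.standard_normal((n, d)) / np.sqrt(d)
K_nngp = nngp_relu_kernel(X)
for width in (50, 500, 5000):
    K_w = empirical_kernel(X, width)
    gap = np.linalg.norm(K_w - K_nngp) / np.linalg.norm(K_nngp)
    print(f"width={width:5d}  relative kernel discrepancy = {gap:.3f}")
```

As the width grows, the relative discrepancy shrinks (roughly as one over the square root of the width); this finite-width deviation from the NNGP kernel is the perturbation the paper characterises spectrally.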

Coarse-grained modeling of DNA–RNA hybrids

Journal of Chemical Physics, American Institute of Physics, 160:11 (2024), 115101

Authors:

Eryk Ratajczyk, Petr Šulc, Andrew Turberfield, Jonathan Doye, Ard A Louis

Abstract:

We introduce oxNA, a new model for the simulation of DNA–RNA hybrids that is based on two previously developed coarse-grained models—oxDNA and oxRNA. The model naturally reproduces the physical properties of hybrid duplexes, including their structure, persistence length, and force-extension characteristics. By parameterizing the DNA–RNA hydrogen bonding interaction, we fit the model’s thermodynamic properties to experimental data using both average-sequence and sequence-dependent parameters. To demonstrate the model’s applicability, we provide three examples of its use—calculating the free energy profiles of hybrid strand displacement reactions, studying the resolution of a short R-loop, and simulating RNA-scaffolded wireframe origami.
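
The persistence length and force-extension behaviour mentioned above are conventionally summarised with worm-like-chain fits. As a point of reference (this is not the paper's oxNA analysis, and the persistence lengths below are placeholders rather than fitted hybrid values), here is a sketch of the standard Marko-Siggia interpolation formula.

```python
import numpy as np

KT = 4.11  # thermal energy at ~298 K, in pN*nm

def wlc_force(ext_frac, lp_nm):
    """Marko-Siggia worm-like-chain interpolation: force in pN needed to
    hold a polymer at fractional end-to-end extension x/Lc, given its
    persistence length in nm."""
    x = np.asarray(ext_frac, dtype=float)
    return (KT / lp_nm) * (0.25 / (1.0 - x) ** 2 - 0.25 + x)

# Illustrative persistence lengths (nm): ~50 nm is the textbook value for
# B-DNA; the hybrid-duplex figure is a placeholder, not the oxNA fit.
for label, lp in [("B-DNA duplex", 50.0), ("DNA-RNA hybrid (placeholder)", 60.0)]:
    forces = wlc_force([0.5, 0.8, 0.95], lp)
    print(label, np.round(forces, 2), "pN at x/Lc = 0.5, 0.8, 0.95")
```

A stiffer duplex (larger persistence length) requires less force at a given fractional extension, which is why force-extension curves are a sensitive check on a coarse-grained model's mechanics.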

Exploring simplicity bias in 1D dynamical systems

(2024)

Authors:

Kamaludin Dingle, Mohammad Alaskandarani, Boumediene Hamzi, Ard A Louis

An exactly solvable model for emergence and scaling laws in the multitask sparse parity problem

Advances in Neural Information Processing Systems 37 (2024)

Authors:

Y Nam, N Fonseca, SH Lee, C Mingard, AA Louis

Abstract:

Deep learning models can exhibit what appears to be a sudden ability to solve a new problem as training time, training data, or model size increases, a phenomenon known as emergence. In this paper, we present a framework where each new ability (a skill) is represented as a basis function. We solve a simple multi-linear model in this skill-basis, finding analytic expressions for the emergence of new skills, as well as for scaling laws of the loss with training time, data size, model size, and optimal compute. We compare our detailed calculations to direct simulations of a two-layer neural network trained on multitask sparse parity, where the tasks in the dataset are distributed according to a power-law. Our simple model captures, using a single fit parameter, the sigmoidal emergence of multiple new skills as training time, data size or model size increases in the neural network.
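
For readers unfamiliar with the benchmark, the following numpy sketch constructs a toy multitask sparse parity dataset of the kind the abstract describes: each task owns a small random subset of bit positions, its label is the parity of those bits, and tasks are sampled with power-law frequencies. The task count, bit width, sparsity and exponent are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def multitask_sparse_parity(n_samples, n_tasks=32, n_bits=32, k=3, alpha=1.5):
    """Toy multitask sparse parity data: task t owns a random k-subset of
    the n_bits positions, and tasks are drawn with power-law probabilities
    p_t proportional to t**(-alpha). Hyperparameters are illustrative."""
    subsets = [rng.choice(n_bits, size=k, replace=False) for _ in range(n_tasks)]
    p = np.arange(1, n_tasks + 1, dtype=float) ** -alpha
    p /= p.sum()
    tasks = rng.choice(n_tasks, size=n_samples, p=p)
    bits = rng.integers(0, 2, size=(n_samples, n_bits))
    X = np.concatenate([np.eye(n_tasks)[tasks], bits], axis=1)  # one-hot task code + bits
    y = np.array([bits[i, subsets[t]].sum() % 2 for i, t in enumerate(tasks)])
    return X.astype(np.float32), y

X, y = multitask_sparse_parity(10_000)
print(X.shape, f"fraction of odd parities = {y.mean():.2f}")
```

Under a power-law task distribution, rarer tasks receive proportionally fewer examples, which is what spreads the sigmoidal emergence of individual skills across training time, data size and model size.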

Controlling DNA–RNA strand displacement kinetics with base distribution

(2024)

Authors:

Eryk Ratajczyk, Jonathan Bath, Petr Šulc, Jonathan PK Doye, Ard Louis, Andrew Turberfield
