Submersed Micropatterned Structures Control Active Nematic Flow, Topology and Concentration

(2021)

Authors:

Kristian Thijssen, Dimitrius Khaladj, S Ali Aghvami, Mohamed Amine Gharbi, Seth Fraden, Julia M Yeomans, Linda S Hirst, Tyler N Shendruk

Systematic strong coupling expansion for out-of-equilibrium dynamics in the Lieb-Liniger model

(2021)

Authors:

Etienne Granet, Fabian HL Essler

Integrability and braided tensor categories

Journal of Statistical Physics Springer 182:2 (2021) 43

Abstract:

Many integrable statistical mechanical models possess a fractional-spin conserved current. Such currents have been constructed by utilising quantum-group algebras and ideas from “discrete holomorphicity”. I find them naturally and much more generally using a braided tensor category, a topological structure arising in knot invariants, anyons and conformal field theory. I derive a simple constraint on the Boltzmann weights admitting a conserved current, generalising one found using quantum-group algebras. The resulting trigonometric weights are typically those of a critical integrable lattice model, so the method here gives a linear way of “Baxterising”, i.e. building a solution of the Yang-Baxter equation out of topological data. It also illuminates why many models do not admit a solution. I discuss many examples in geometric and local models, including (perhaps) a new solution.
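For context, the Yang-Baxter equation that "Baxterising" produces a solution of is, in its standard R-matrix form (a textbook statement, not reproduced from the paper itself):

```latex
% Yang-Baxter equation in R-matrix form (standard statement, given here only as background).
% R_{ij}(u) acts on factors i and j of V \otimes V \otimes V; u and v are spectral parameters.
\begin{equation}
  R_{12}(u)\, R_{13}(u+v)\, R_{23}(v) \;=\; R_{23}(v)\, R_{13}(u+v)\, R_{12}(u)
\end{equation}
```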

Investigating the nature of active forces in tissues reveals how contractile cells can form extensile monolayers

Nature Materials Nature Research 20:8 (2021) 1156-1166

Authors:

Lakshmi Balasubramaniam, Amin Doostmohammadi, Thuan Beng Saw, Gautham Hari Narayana Sankara Narayana, Romain Mueller, Tien Dang, Minnah Thomas, Shafali Gupta, Surabhi Sonam, Alpha S Yap, Yusuke Toyama, René-Marc Mège, Julia M Yeomans, Benoît Ladoux

Abstract:

Actomyosin machinery endows cells with contractility at a single-cell level. However, within a monolayer, cells can be contractile or extensile based on the direction of pushing or pulling forces exerted by their neighbours or on the substrate. It has been shown that a monolayer of fibroblasts behaves as a contractile system, while epithelial or neural progenitor monolayers behave as an extensile system. Through a combination of cell culture experiments and in silico modelling, we reveal that the mechanism behind this switch from extensile to contractile behaviour is the weakening of intercellular contacts. This switch promotes the build-up of tension at the cell–substrate interface through an increase in actin stress fibres and traction forces. This is accompanied by mechanotransductive changes in vinculin and YAP activation. We further show that contractile and extensile differences in cell activity sort cells in mixtures, uncovering a generic mechanism for pattern formation during cell competition and morphogenesis.
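As background, in the active-nematic description commonly used for such in silico models (a standard convention in this field, not spelled out in the abstract), the extensile/contractile distinction corresponds to the sign of the activity coefficient in the active stress:

```latex
% Standard active-nematic stress convention (background sketch; the paper's specific
% model parameters are not given in this abstract).
% Q is the nematic order tensor built from the cell-orientation director n.
\begin{align}
  \sigma^{\mathrm{active}} &= -\zeta\, Q, &
  Q &= S\left(n\, n^{\mathsf{T}} - \tfrac{1}{d}\,\mathbb{I}\right),
\end{align}
% \zeta > 0 : extensile (pushing outward along n);  \zeta < 0 : contractile (pulling inward along n).
```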

Is SGD a Bayesian sampler? Well, almost

Journal of Machine Learning Research 22 (2021) 79

Authors:

Chris Mingard, Guillermo Valle-Perez, Joar Skalse, Ard A Louis

Abstract:

Deep neural networks (DNNs) generalise remarkably well in the overparameterised regime, suggesting a strong inductive bias towards functions with low generalisation error. We empirically investigate this bias by calculating, for a range of architectures and datasets, the probability P_SGD(f|S) that an overparameterised DNN, trained with stochastic gradient descent (SGD) or one of its variants, converges on a function f consistent with a training set S. We also use Gaussian processes to estimate the Bayesian posterior probability P_B(f|S) that the DNN expresses f upon random sampling of its parameters, conditioned on S. Our main findings are that P_SGD(f|S) correlates remarkably well with P_B(f|S) and that P_B(f|S) is strongly biased towards low-error and low-complexity functions. These results imply that the strong inductive bias in the parameter-function map (which determines P_B(f|S)), rather than a special property of SGD, is the primary explanation for why DNNs generalise so well in the overparameterised regime. While our results suggest that the Bayesian posterior P_B(f|S) is the first-order determinant of P_SGD(f|S), there remain second-order differences that are sensitive to hyperparameter tuning. A function-probability picture, based on P_SGD(f|S) and/or P_B(f|S), can shed light on the way that variations in architecture or hyperparameter settings such as batch size, learning rate, and optimiser choice affect DNN performance.
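As an illustrative aside (not the authors' code), the function-probability comparison can be sketched on a toy Boolean problem: estimate P_SGD(f|S) by repeatedly training a small network and recording which labelling of a held-out set it converges to, and estimate P_B(f|S) by rejection-sampling random parameters and keeping only draws consistent with S. The network, dataset, and sampling scheme below are hypothetical stand-ins; the paper itself uses Gaussian-process estimates of P_B(f|S) on much larger architectures and datasets.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Toy task: 5-bit Boolean inputs; target labels come from a simple function (first bit).
n_bits, hidden = 5, 32
X = np.array([[int(b) for b in np.binary_repr(i, n_bits)] for i in range(2 ** n_bits)], float)
y = X[:, 0].copy()                                         # simple (low-complexity) target
train = rng.choice(len(X), size=8, replace=False)          # training set S
test = np.setdiff1d(np.arange(len(X)), train)              # a "function f" = labelling of these inputs

def init():
    # One hidden tanh layer with a linear readout, randomly initialised.
    return rng.normal(0, 1, (n_bits, hidden)), rng.normal(0, 1, hidden)

def predict(params, X):
    W, v = params
    return (np.tanh(X @ W) @ v > 0).astype(int)            # thresholded output = Boolean function

def consistent(params):
    return np.all(predict(params, X[train]) == y[train])   # zero training error on S

def train_sgd(params, steps=2000, lr=0.1):
    W, v = params
    for _ in range(steps):
        i = rng.integers(len(train))
        x, t = X[train[i]], y[train[i]]
        h = np.tanh(x @ W)
        p = 1.0 / (1.0 + np.exp(-(h @ v)))                 # sigmoid output
        g = p - t                                          # dLoss/dlogit for cross-entropy
        grad_v = g * h
        grad_W = g * np.outer(x, v * (1.0 - h ** 2))
        v -= lr * grad_v
        W -= lr * grad_W
    return W, v

# Estimate P_SGD(f | S): frequency of each held-out labelling among SGD runs that fit S.
sgd_counts = Counter()
for _ in range(200):
    params = train_sgd(init())
    if consistent(params):
        sgd_counts[tuple(predict(params, X[test]))] += 1

# Estimate P_B(f | S): rejection-sample random parameters, keeping only draws consistent with S.
bayes_counts = Counter()
for _ in range(200_000):
    params = init()
    if consistent(params):
        bayes_counts[tuple(predict(params, X[test]))] += 1

print("most common f under SGD:     ", sgd_counts.most_common(3))
print("most common f under sampling:", bayes_counts.most_common(3))
```

Normalising each Counter by its total gives the two empirical function probabilities, whose correlation over the observed functions is the kind of comparison the abstract describes, here in a drastically simplified setting.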