Resonant inelastic x-ray scattering in warm-dense Fe compounds beyond the SASE FEL resolution limit
(2024)
Laboratory realization of relativistic pair-plasma beams
(2024)
Control of autoresonant plasma beat-wave wakefield excitation
Physical Review Research 6:1 (2024)
Abstract:
Autoresonant phase locking of the plasma wakefield to the beat frequency of two driving lasers offers advantages over conventional wakefield acceleration methods, since it requires less demanding laser parameters and is robust to variations in the target plasma density. Here, we investigate the kinetic and nonlinear processes that come into play during autoresonant plasma beat-wave acceleration of electrons, and their impact on the field amplitude of the accelerating structure and on acceleration efficiency. Particle-in-cell simulations show that the process depends on the plasma density in a nontrivial way but can be reliably modeled under specific conditions. Besides recovering previous fluid results in the deeply underdense plasma limit, we demonstrate that robust field excitation can be achieved within a fully kinetic, self-consistent model. By adjusting the laser properties, we can amplify the electric field to the desired level, up to wave breaking, and efficiently accelerate particles; we provide suggestions for optimized laser and plasma parameters. This versatile and efficient acceleration scheme, producing electrons with energies from tens to hundreds of MeV, holds promise for a wide range of applications in research, industry, and medicine.
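
As a rough illustration of the resonance condition underlying this scheme (our own back-of-the-envelope Python sketch, not material from the paper; the density and wavelength values are hypothetical examples), the beat frequency of the two drive lasers must lie near the electron plasma frequency, which fixes the second laser wavelength once the first wavelength and the plasma density are chosen:

import numpy as np

# Physical constants (SI units)
e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8

def plasma_frequency(n_e_cm3):
    """Electron plasma frequency omega_p = sqrt(n_e e^2 / (eps0 m_e)) in rad/s,
    for an electron density given in cm^-3."""
    n_e = n_e_cm3 * 1e6                        # cm^-3 -> m^-3
    return np.sqrt(n_e * e**2 / (eps0 * m_e))

def second_wavelength(lambda1, n_e_cm3):
    """Wavelength of the second drive laser such that the beat frequency
    omega_1 - omega_2 equals omega_p exactly (resonant beat wave)."""
    omega_p = plasma_frequency(n_e_cm3)
    return 1.0 / (1.0 / lambda1 - omega_p / (2.0 * np.pi * c))

n_e  = 1e17       # hypothetical underdense plasma density, cm^-3
lam1 = 800e-9     # hypothetical first drive wavelength (Ti:sapphire), m
print(f"omega_p  = {plasma_frequency(n_e):.3e} rad/s")   # ~1.78e13 rad/s
print(f"lambda_2 = {second_wavelength(lam1, n_e) * 1e9:.1f} nm")  # ~806 nm

In the autoresonant scheme described in the abstract, the beat frequency is instead chirped slowly through omega_p rather than fixed at exact resonance, which is what relaxes the sensitivity to the plasma density.
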
Parallelizing non-linear sequential models over the sequence length
12th International Conference on Learning Representations, ICLR 2024 (2024)
Abstract:
Sequential models, such as Recurrent Neural Networks and Neural Ordinary Differential Equations, have long suffered from slow training due to their inherent sequential nature. For many years this bottleneck has persisted, as many thought sequential models could not be parallelized. We challenge this long-held belief with our parallel algorithm that accelerates GPU evaluation of sequential models by up to three orders of magnitude without compromising output accuracy. The algorithm does not need any special structure in the sequential models' architecture, making it applicable to a wide range of architectures. Using our method, training sequential models can be more than 10 times faster than the common sequential method without any meaningful difference in the training results. Leveraging this accelerated training, we discovered the efficacy of the Gated Recurrent Unit in a long time-series classification problem with 17k time samples. By overcoming the training bottleneck, our work serves as the first step to unlock the potential of non-linear sequential models for long sequence problems.
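
The abstract does not spell out the algorithm itself, so the Python sketch below only illustrates the general idea of evaluating a nonlinear recurrence h_t = f(h_{t-1}, x_t) in parallel over the sequence length via fixed-point iteration; it is a generic stand-in, not necessarily the paper's method, and the toy cell f and all parameters are hypothetical:

import numpy as np

def f(h_prev, x):
    # GRU-like toy cell (hypothetical stand-in for any nonlinear state update).
    return np.tanh(0.5 * h_prev + x)

def sequential_eval(h0, xs):
    # Standard step-by-step evaluation of h_t = f(h_{t-1}, x_t).
    hs, h = [], h0
    for x in xs:
        h = f(h, x)
        hs.append(h)
    return np.array(hs)

def parallel_fixed_point(h0, xs, tol=1e-10, max_iters=None):
    # Refine a guess for the whole trajectory h_1..h_T at once: every sweep
    # updates all time steps from the previous iterate, so the work inside a
    # sweep is embarrassingly parallel over t (the GPU-friendly part).
    T = len(xs)
    hs = np.zeros_like(xs)                         # initial guess for h_1..h_T
    max_iters = max_iters or T
    for _ in range(max_iters):
        shifted = np.concatenate(([h0], hs[:-1]))  # h_0..h_{T-1} from last iterate
        new_hs = f(shifted, xs)                    # all t updated simultaneously
        if np.max(np.abs(new_hs - hs)) < tol:
            return new_hs
        hs = new_hs
    return hs

rng = np.random.default_rng(0)
xs = rng.normal(size=1000)
assert np.allclose(sequential_eval(0.0, xs), parallel_fixed_point(0.0, xs))

In exact arithmetic the iterate matches the sequential result after at most T sweeps, and contractive dynamics typically converge in far fewer; the paper's reported speed-ups presumably rely on a faster-converging update than this naive illustration.
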
Efficient prediction of attosecond two-colour pulses from an X-ray free-electron laser with machine learning
arXiv:2311.14751 (2023)