First measurement of inclusive muon neutrino charged current differential cross sections on argon at Eν∼0.8 GeV with the MicroBooNE detector
Physical Review Letters American Physical Society 123:13 (2019) 131801
Abstract:
We report the first measurement of the double-differential and total muon-neutrino charged-current inclusive cross sections on argon at a mean neutrino energy of 0.8 GeV. Data were collected using the MicroBooNE liquid argon time projection chamber located in the Fermilab Booster neutrino beam, and correspond to an exposure of $1.6 \times 10^{20}$ protons on target. The measured differential cross sections are presented as a function of muon momentum, using multiple Coulomb scattering as a momentum measurement technique, and of the muon angle with respect to the beam direction. We compare the measured cross sections to multiple neutrino event generators and find better agreement with those containing more complete physics at low $Q^2$. The total flux-integrated cross section is measured to be $(0.693 \pm 0.010\,(\text{stat.}) \pm 0.165\,(\text{syst.})) \times 10^{-38}\,\text{cm}^{2}$.
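For orientation, the sketch below shows a generic form of how a flux-integrated double-differential cross section in a muon-kinematics bin $i$ is typically extracted; the notation is illustrative and not necessarily that used in the analysis.

$$\left(\frac{d^{2}\sigma}{dp_{\mu}\,d\cos\theta_{\mu}}\right)_{i} = \frac{N_{i} - B_{i}}{\varepsilon_{i}\,\Phi\,N_{\mathrm{targets}}\,(\Delta p_{\mu})_{i}\,(\Delta\cos\theta_{\mu})_{i}}$$

Here $N_{i}$ is the number of selected events in bin $i$, $B_{i}$ the estimated background, $\varepsilon_{i}$ the selection efficiency, $\Phi$ the integrated neutrino flux, $N_{\mathrm{targets}}$ the number of argon targets in the fiducial volume, and $(\Delta p_{\mu})_{i}$, $(\Delta\cos\theta_{\mu})_{i}$ the bin widths.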
A prototype for the evolution of ATLAS EventIndex based on Apache Kudu storage
EPJ Web of Conferences EDP Sciences 214 (2019)
Abstract:
The ATLAS EventIndex has been in operation since the beginning of LHC Run 2 in 2015. Like all software projects, its components have been constantly evolving and improving in performance. The main data store in Hadoop, based on MapFiles and HBase, can work for the rest of Run 2, but new solutions are being explored for the future. Kudu offers an interesting environment, with a mixture of BigData and relational database features, that looks promising at the design level. This environment is used to build a prototype to measure the scaling capabilities as functions of data input rates, total data volumes, and data query and retrieval rates. In these proceedings we report on the selected data schemas and on the current performance measurements with the Kudu prototype.
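As a rough illustration of the kind of schema such a prototype stores, the sketch below creates a Kudu table keyed by run and event number using the Apache Kudu Python client; the table name, columns, and partitioning are assumptions chosen for illustration, not the actual ATLAS EventIndex schema.

import kudu
from kudu.client import Partitioning

# Connect to a Kudu master (host and port are placeholders).
client = kudu.connect(host='kudu-master.example.org', port=7051)

# Illustrative columns: a composite primary key on (run_number, event_number)
# plus a GUID column standing in for per-event metadata.
builder = kudu.schema_builder()
builder.add_column('run_number').type(kudu.int64).nullable(False)
builder.add_column('event_number').type(kudu.int64).nullable(False)
builder.add_column('guid').type(kudu.string).nullable(True)
builder.set_primary_keys(['run_number', 'event_number'])
schema = builder.build()

# Hash-partition on run_number so ingestion and queries spread across tablets.
partitioning = Partitioning().add_hash_partitions(column_names=['run_number'],
                                                  num_buckets=16)

client.create_table('eventindex_prototype', schema, partitioning)

Hash partitioning on the run number is one simple way to keep write and read load balanced as input rates and total data volumes grow, which is what the prototype's scaling measurements probe.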
Conditions evolution of an experiment in mid-life, without the crisis (in ATLAS)
23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018) EPJ Web of Conferences 214 (2019)
Abstract:
The ATLAS experiment is approaching mid-life: the long shutdown period (LS2) between LHC Runs 1 and 2 (ending in 2018) and the future collision data-taking of Runs 3 and 4 (starting in 2021). In advance of LS2, we have been assessing the future viability of existing computing infrastructure systems. This will permit changes to be implemented in time for Run 3. In systems with broad impact, such as the conditions database, making assessments now is critical, as the full chain of operations from online data-taking to offline processing can be considered: evaluating capacity at peak times, looking for bottlenecks, identifying areas of high maintenance, and considering where new technology may serve to do more with less. We have been considering changes to the ATLAS Conditions Database related storage and distribution infrastructure, drawing on similar systems of other experiments. We have also examined how new technologies may help and how we might provide more RESTful services to clients. In this presentation, we give an overview of the identified constraints and considerations, and our conclusions for the best way forward: balancing preservation of critical elements of the existing system with the deployment of new technology in areas where the existing system falls short.
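As a minimal sketch of what "more RESTful services to clients" could look like, the snippet below fetches a conditions payload over HTTP; the service URL, endpoint, parameters, and folder/tag names are hypothetical placeholders, not the actual ATLAS interface.

import requests

BASE_URL = "https://conditions.example.org/api"  # hypothetical service URL

def fetch_payload(folder, tag, run, lumiblock):
    """Fetch the conditions payload valid for the given run and lumiblock."""
    response = requests.get(
        f"{BASE_URL}/payload",
        params={"folder": folder, "tag": tag, "run": run, "lb": lumiblock},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# Illustrative lookup for a made-up alignment folder and tag.
payload = fetch_payload("/Example/Align", "ExampleAlign-TAG-01", run=350000, lumiblock=100)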
Optimizing access to conditions data in ATLAS event data processing
23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018) EDP Sciences (2019)
Abstract:
The processing of ATLAS event data requires access to conditions data which are stored in database systems. This data includes, for example, alignment, calibration, and configuration information, which may be characterized by large volumes, diverse content, and/or information which evolves over time as refinements are made in those conditions. Additional layers of complexity are added by the need to provide this information across the worldwide ATLAS computing grid and by the sheer number of simultaneously executing processes on the grid, each demanding a unique set of conditions to proceed. Distributing this data efficiently to all the processes that require it has proven to be an increasing challenge with the growing needs and numbers of event-wise tasks. In this presentation, we briefly describe the systems in which we have collected information about the database content and the use of conditions in event data processing. We then explain how this information has been used not only to refine reconstruction software and job configuration but also to guide modifications of the underlying conditions data configuration and, in some cases, rewrites of the data in the database into a more harmonious form for offline usage in the processing of both real and simulated data.
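As a toy example of how collected usage information can guide such refinements, the sketch below ranks conditions folders by how often jobs query them; the CSV file name and columns are assumptions for illustration, not the actual ATLAS monitoring format.

import csv
from collections import Counter

def hot_folders(path, top=10):
    """Rank conditions folders by total number of recorded queries."""
    counts = Counter()
    with open(path, newline="") as usage_file:
        for row in csv.DictReader(usage_file):
            counts[row["folder"]] += int(row["n_queries"])
    return counts.most_common(top)

# Print the folders whose payloads are requested most often.
for folder, n_queries in hot_folders("conditions_usage.csv"):
    print(f"{folder}: {n_queries} queries")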
The challenges of mining logging data in ATLAS
23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018) EPJ Web of Conferences 214 (2019)