ATLAS is one of the major detector experiments at the Large Hadron Collider (LHC) at CERN. The LHC is in a scheduled shutdown until the end of 2020 to upgrade both the collider and the detectors, which also poses new challenges for ATLAS distributed computing (ADC). The higher luminosity in the next run will significantly increase the data rate and storage needs, and higher efficiency in data processing will also be required. On a longer time scale, the next scheduled upgrade, to the High-Luminosity LHC (HL-LHC), foreseen to start in 2026, will have an even bigger impact and requires long-term preparation, both of the worldwide storage and computing infrastructure and of the software tools.
The Iberian ATLAS Tier-1 and Tier-2s in Spain and Portugal form one regional component of the worldwide ADC infrastructure. They have more than 15 years of experience in the deployment, development, and successful operation of LHC computing components. The sites are already actively participating in, and in some cases coordinating, emerging computing R&D activities that develop the new computing models needed in the LHC Run 3 and HL-LHC periods.
In this contribution, we present details of these development activities, such as:
- the use of HPC computing resources to execute ATLAS simulation workflows;
- the development of new techniques to improve efficiency in a cost-effective way, such as storage and CPU federations;
- the recent development of new monitoring tools that allow more efficient control of worldwide computing and storage operations;
- and improvements in Data Organization, Management and Access through storage consolidation ("data lakes"), the use of data caches, and improvements to experiment data catalogues such as the Event Index.
The design and deployment of novel analysis facilities using GPUs alongside CPUs, together with techniques such as Machine Learning, will also be presented.
We present the status of the Iberian ATLAS Tier-1 and Tier-2 sites, taking into account the national perspectives, and discuss how they can continue contributing to this significant computing R&D by evaluating different models for improving the performance of computing and data storage capacity in the HL-LHC era.