IBERGRID 2019 - Delivering Innovative Computing and Data services to Researchers

Alberto Azevedo (Laboratório Nacional de Engenharia Civil), Ignacio Blanquer Espert (Universitat Politècnica de València), Isabel Campos (CSIC), Jorge Gomes (LIP), Ludek Matyska, Marcin Plociennik (PSNC), Marcus Hardt, Tiziana Ferrari
Description

The 10th Iberian Grid Conference (IBERGRID 2019) will take place in Santiago de Compostela from Monday 23rd to Thursday 26th of September (see also the event announcement on the IBERGRID website).

The call for abstracts is open (extended deadline: 7th of July).

All contributions will have the opportunity to be published in the conference proceedings. After the conference, the authors of accepted contributions will be invited to submit a full manuscript describing the content of their oral presentation or poster. The manuscripts will go through a peer-review process.

Students and early-career researchers (Master's and PhD students) will have the fee waived upon presentation of proof of their official Master's or PhD registration status. On request, a certificate confirming the presentation of the oral contribution or poster will also be provided.

For more information contact: ibergrid2019@lip.pt

Oral presentations and posters are welcome in the following areas:

  • Cooperation between Iberian Research Communities

This track aims at describing research projects jointly undertaken by Spanish and Portuguese research institutions, either on their own or in cooperation with institutions from other countries. We welcome contributions showing complementary domain expertise and the exploitation of synergies at the Iberian level.

  • Enabling Research Applications in advanced Digital Infrastructures

This track aims at featuring work related to the integration of scientific applications into the EOSC, EGI and IBERGRID ecosystem. Contributions on distributed computing applications in grids and clouds, distributed data management, and data processing in community-specific or national data repositories are welcome.

  • Enabling and fostering Open Science adoption

This track aims to feature contributions describing technical integration, policy development, success stories and/or ongoing work towards the adoption of FAIR data and, in general, developments oriented to fostering the adoption of the Open Science paradigm.

  • Development of Innovative Software Services

This track aims to feature innovative solutions being pursued in service development and implementation. The session welcomes both production-level services and ongoing developments. Pilot service implementations, and descriptions of their potential usability for wide user communities, are also welcome.

  • R&D for computing services, networking, and data-driven science.

This track aims to show the work of computing and data centers in developing and/or implementing advanced tools, services, infrastructures and methodologies to serve the needs of cutting-edge research projects using Cloud, HPC and Distributed Computing.

Tutorial on Innovative Computing and Data services to Researchers   

On Thursday 26th, the EOSC-synergy project organizes a one-day tutorial dedicated to introducing scientific and technological users to cloud infrastructures.

Attendance to the tutorial is free of charge for those registered for IBERGRID 2019. A certificate will be issued to the attendees.

Spanish Government · FCT

    • Welcome & Opening Plenaries
      Convener: Mauro Fernandez Dabouza (CESGA)
      • 1
        Welcome and Opening

        Welcome address by MICIU, FCT, CSIC, EC and CESGA

        • a) Ministerio de Ciencia, Innovación y Universidades
        • b) FCT
        • c) European Commission
        • d) CSIC
        • e) CESGA
      • 2
        IBERGRID status presentation
        Speaker: Jorge Gomes (LIP)
    • 11:00
      Coffee Break
    • e-Infrastructure Plenaries

      Invited keynote speakers from IBERGRID and EOSC-synergy.

      Convener: Joao Nuno Ferreira
      • 3
        The Research Data Alliance: a (research) data window to the world
        Speaker: Hilary Hanahoe
      • 4
        RDA Spain
        Speaker: Tamara Kovazh
      • 5
        The ascent of scientific computing: the EGI role and contribution towards the European Open Science Cloud

        This presentation provides an overview of the central role of distributed data processing in supporting the scientific excellence of international collaborations over the past decade. We present the architecture and governance model of EGI, the European infrastructure for exabyte-scale computing, and we demonstrate how open science has been benefiting from the power delivered by the EGI Federation, which connects more than 1,000,000 CPU cores worldwide to realize the largest computing platform for research in the world. The presentation concludes by introducing the technical and organizational challenges that scientific computing will face in the coming decade, and the role that EGI and IBERGRID will play in the context of the European Open Science Cloud initiative of the European Commission.

        Speaker: Dr Tiziana Ferrari (EGI.eu)
      • 6
        EOSC-synergy: Expanding the capacity and capabilities of EOSC at the National levels
        Speaker: Dr Isabel Campos
      • 7
        Closing
        Speaker: Christian Cuciniello
    • 13:30
      Lunch break
    • EOSC-synergy Work Packages Breakouts: WP2 & WP3

      The session will be organized by WP2 and WP3 leaders.

      Conveners: Jorge Gomes, Marcus Hardt
    • Scientific Data Repositories: a National perspective
      Convener: Ladislav Hluchy
      • 8
        DANS approach to Data repositories in the Netherlands
        Speaker: Mr Gerard Cohen
      • 9
        (FAIR4HEALTH presentation)
        Speaker: Dr Carlos Luis Parra
      • 10
        European Data Incubator: fostering Big Data and AI driven economy in Europe
        Speaker: Dr Diego Lopez de Ipiña
      • 11
        Big data, big responsibility: data lineage management with template for reproducible scientific papers
        Speaker: Mohammad Akhlaghi
      • 12
        Biodiversity Data repositories in Poland
        Speaker: Dr Marcin Plociennik
      • 13
        FCT roadmap for scientific data repositories
        Speaker: Joao Nuno Ferreira
    • 16:00
      Coffee break
    • EOSC-synergy Work Packages Breakouts: WP2 & WP3
      Conveners: Jorge Gomes, Marcus Hardt
    • EOSC-synergy Work Packages Breakouts: WP5

      Welcome (Ludek Matyska)

      Round table (All)

      WP5 Presentation, Objectives and Goals (Ludek Matyska)

      T5.1 Landscaping
      • overview and current situation
      • available/existing information sources
      • identification of the information to be gathered

      T5.2 Gap analysis
      • input to T5.1: which information is needed for the gap analysis

      T5.3 National and international liaison
      • identification of partners and target groups
      • collection of contacts
      • collaboration strategy with target groups

      AOB and Wrap-up

      Expected output of the meeting:
      WP5 workplan until the end of the year (Matyska, all)

      Conveners: Ludek Matyska, Tiziana Ferrari
    • IBERGRID Contributions
      Convener: Susana Sanchez Exposito
      • 14
        Using Big Data for Anomaly Detection

        During the last years, Big Data technologies, and in particular Hadoop and HBase, have enabled us to expand enormously the information that we collect and store from all our servers and infrastructures. We no longer need to discard old data using round-robin databases or restrict the number of active Nagios-style checks.

        Now we can take full advantage of a metric collection infrastructure that allows us to perform Nagios-style checks directly against the metrics database instead of directly accessing the servers.

        The information currently includes tens of thousands of time-series that are stored in HBase as well as a large collection of logs stored in HDFS.

        The next challenge is to analyze this data to detect anomalous behaviour. Usually this is done by operators looking at different operational dashboards; however, the data set has become too large and diverse for manual interpretation.

        To take advantage of these metrics, we started evaluating generic anomaly detection techniques, applying them directly to our time-series and log data. The main problem we encountered when evaluating these generic solutions is that they produce a large number of false positives that greatly reduce their usefulness. It is not practical to have a system that produces so many alerts that it is impossible for operators to investigate all of them.

        So three years ago, we started developing our own custom algorithms to detect anomalies. We will show how this approach enabled us not only to have a better understanding of our systems but also to obtain accurate results for different use cases ranging from SSH attack detection to CPU malfunctioning detection.
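
        The custom algorithms themselves are not detailed in this abstract; purely as an illustration of the kind of check that can be run directly against stored time-series metrics (not the detection logic used at CESGA), a simple rolling z-score test in Python might look as follows, assuming the metric values have already been fetched from HBase:

        import numpy as np

        def rolling_zscore_anomalies(values, window=60, threshold=4.0):
            """Flag points deviating from the trailing-window mean by more than
            `threshold` standard deviations (illustrative only)."""
            values = np.asarray(values, dtype=float)
            anomalies = []
            for i in range(window, len(values)):
                past = values[i - window:i]
                mu = past.mean()
                sigma = max(past.std(), 1e-9)   # avoid a zero sigma on flat series
                if abs(values[i] - mu) > threshold * sigma:
                    anomalies.append(i)
            return anomalies

        # Example: a flat CPU-load series with one sudden spike at the end
        series = [0.2] * 120 + [3.5]
        print(rolling_zscore_anomalies(series))   # -> [120]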

        Speaker: Javier Cacheiro (CESGA)
      • 15
        Hybrid batch system deployment with AWS spot instances

        Our institution, the Port d'Informació Científica (PIC), is an innovative centre for supporting research, providing support to scientific groups working on projects that require large amounts of computing resources for the analysis of massive sets of distributed data. PIC is the Spanish Tier-1 center for the Large Hadron Collider, the main (Tier-0) data center for the MAGIC telescopes and the PAU dark energy survey, and one of the Science Data Centers of ESA's Euclid mission.

        At PIC we have piloted a hybrid cloud computing platform fully integrated into our batch computing service and transparent to the final users. We doubled our computing capacity using AWS spot instances for 72 hours in order to test how we can cover our peak computing needs at an affordable price.

        To test this hybrid batch system infrastructure we used the HTCondor condor_annex tool, which makes the process of extending a local pool with cloud resources easy, fast and, if the user needs it, bounded by an expiration date. In order to get to a production-ready system, everything was tested in three steps: a small batch of on-demand instances in a test environment, a small batch of on-demand and spot instances in a production environment, and a big batch of spot instances in a production environment.

        Initially the jobs were sent to our test environment and then moved to production after checking that the jobs were running correctly, in both cases using on-demand instances. The test continued by launching spot instances in a seamless hybrid infrastructure where the cloud worker nodes were added to the local computing pool and had jobs running within minutes. Accounting and monitoring of the cloud resources have been fully integrated with the local system.

        Amazon Web Services Spot Instances offer the possibility to instantiate machines at a fraction of the on-demand price due to low demand for specific instance types at specific times. When many instances are launched and the conditions to keep them running change, some or all of them can be stopped at any moment. This suits use cases like the one tested here very well.

        There were some other elements needed to configure the system, such as a custom worker node image created and stored in a specific AWS region, and an HTCondor Connection Broker (CCB) to enable communication between the AWS nodes and the local system, in addition to the changes in the HTCondor configuration to accept the new servers as its own.

        Speaker: Mr Jordi Casals (CIEMAT)
      • 16
        Experience with the GÉANT Cloud IaaS Framework Agreement

        GÉANT has carried out a Europe-wide framework procurement for an Infrastructure as a Service (IaaS) cloud portfolio for the European research and education sector. The result was a multi-supplier framework whereby a number of IaaS cloud vendors were awarded framework contracts. Under this framework, academic and research organizations from European Union countries can directly contract cloud IaaS services from these vendors through a simplified purchase procedure, while respecting the national public procurement laws of member countries. Within each country the national research and education network (NREN) is involved in the promotion of the agreement. In order to validate the framework agreement, several organizations have been piloting services contracted under it.

        In this communication we describe the piloting activities conducted by the Portuguese Distributed Computing Infrastructure (INCD) aimed at validating and exploiting the capabilities of the GÉANT cloud framework agreement in Portugal. The following aspects will be addressed: legal compliance of the framework with the national public procurement laws; the contractual process from the selection of a vendor to actual service usage; comparison of service offerings and related conditions, both against the vendors' commercial conditions and against in-house service provisioning; assessment of the billing and payment processes; and evaluation of the actual service delivery and support.

        Speakers: André Vieira (INCD), Mário David (LIP / INCD)
      • 17
        Serverless Computing for Data-Processing Across Public and Federated Clouds

        Serverless computing is evolving from the initial Functions as a Service (FaaS) approach to also embrace the execution of containerised applications without the user managing the underlying computing infrastructure. Indeed, the main public cloud providers, such as Amazon Web Services or Google Cloud, have already started to offer services in this regard. This is the case of AWS Fargate or Google Cloud Run, mainly aimed at the deployment of microservices-based architectures. However, scientific computing can also benefit from the elastic automated management of computational infrastructure for data processing. To this aim, we developed SCAR, an open-source framework to run containers out of Docker images on AWS Lambda, which defines a file-processing computing model that is triggered in response to certain events (such as a file upload or a REST API invocation). This model was extended to on-premises environments through OSCAR, an open-source platform which enables users to deploy their file-processing container-based serverless applications on a dynamically provisioned elastic Kubernetes cluster that can be deployed across multiple clouds and integrated with the EGI Federated Cloud and the EGI Data Hub, based on Onedata.

        In this work we focus on integrating a federated storage for data persistence, in particular the EGI Data Hub, with the ability to dynamically provision computational resources from a public Cloud provider to perform the data processing in response to file uploads. To this aim, we developed OneTrigger, a tool to trigger events from Onedata that can be run as a serverless function in AWS Lambda in order to use SCAR's functionality to perform the execution of jobs in AWS Lambda, supporting thousands of concurrent executions. Longer executions, as well as those requiring specialised computing hardware such as GPUs, are delegated to AWS Batch, a service which enables the unattended and elastic execution of batch computing workloads on the public Cloud. This makes it possible to create hybrid data-processing serverless applications across public and federated Clouds. We demonstrate the feasibility of this approach by introducing a use case in video processing that can leverage GPU-based computing in the public Cloud to dramatically accelerate object recognition, while data persistence is still supported by the federated Cloud.
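
        The SCAR and OneTrigger internals are not reproduced here; as a generic, hypothetical illustration of the event-driven file-processing model (not the actual SCAR code), a Lambda-style handler reacting to an object-storage upload could look roughly like this, where the output bucket and the process-file executable are made up for the example:

        import os
        import subprocess
        import boto3

        s3 = boto3.client("s3")

        def handler(event, context):
            """Invoked once per uploaded file; downloads it, runs the processing
            step and uploads the result to an output bucket."""
            for record in event["Records"]:
                bucket = record["s3"]["bucket"]["name"]
                key = record["s3"]["object"]["key"]
                local_in = os.path.join("/tmp", os.path.basename(key))
                s3.download_file(bucket, key, local_in)

                # Hypothetical containerised processing step (e.g. object recognition)
                local_out = local_in + ".out"
                subprocess.run(["./process-file", local_in, local_out], check=True)

                # Store the result in a (made-up) output bucket
                s3.upload_file(local_out, "my-output-bucket", key + ".out")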

        Speaker: Sebastián Risco (Universitat Politècnica de València)
      • 18
        Rootless containers with udocker

        udocker (https://github.com/indigo-dc/udocker) is a tool that addresses the problem of executing Linux containers in user space, i.e. without installing additional system software, without requiring administrative privileges, and respecting resource usage policies, accounting and process controls. udocker empowers users to easily execute applications encapsulated in containers across a wide range of Linux distributions and systems, including computing clusters.

        udocker implements a subset of Docker commands aimed at searching, pulling, importing, loading and executing containers. The self-installation allows a user to transfer udocker and execute it to pull the required tools and libraries. All required binary tools and libraries are provided with udocker, and compilation is not required. udocker is an integration tool that incorporates several execution methods, giving the user several options to run their containers according to the host capabilities. Several interchangeable execution modes are available that exploit different technologies and tools, enabling udocker to run in both older and newer Linux distributions. Currently udocker supports four execution modes: system call interception and pathname rewriting via PTRACE, dynamic library call interception and pathname rewriting via shared library preload, Linux unprivileged namespaces via runC, and Singularity when locally available. Each approach has its own advantages and limitations, and therefore an integration tool offers flexibility and freedom of choice to better match the applications to the host characteristics. udocker has more than 500 stars on GitHub and is commonly used to execute HTC, HPC and GPGPU applications across datacenters and infrastructures. udocker was developed by LIP in the context of the INDIGO-DataCloud project and is being further extended in DEEP-Hybrid-DataCloud.
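
        As a minimal sketch of the typical workflow (assuming udocker is already installed and available on the user's PATH; the image name, container name and command are arbitrary examples), the pull/create/run cycle can be scripted as follows:

        import subprocess

        def udocker(*args):
            """Invoke the udocker command line tool and fail on errors."""
            subprocess.run(["udocker", *args], check=True)

        # Pull an image from Docker Hub without root privileges
        udocker("pull", "ubuntu:18.04")

        # Create a container from the pulled image
        udocker("create", "--name=myapp", "ubuntu:18.04")

        # Optionally select an execution mode (e.g. F3 = shared-library preload,
        # P1 = PTRACE interception), depending on the host capabilities
        udocker("setup", "--execmode=F3", "myapp")

        # Run a command inside the container, entirely in user space
        udocker("run", "myapp", "cat", "/etc/os-release")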

        This communication will provide an overview of the udocker capabilities, development status and evolution.

        Speakers: Jorge Gomes (LIP / INCD), Mário David (LIP)
      • 19
        Comparison of Container-based Virtualization Tools for HPC Platforms.

        Virtualization technologies are a fundamental element in cloud computing. Docker is the best-known and most widely used container platform worldwide. It is designed for microservices virtualization and application delivery, but its model does not fit well with High-Performance Computing (HPC) platforms. HPC environments are multi-user systems where users should only have access to their own data and computing resources. Misconfigured Docker installations pave the way for privilege escalation, including the ability to access other users' data and, at the same time, gain control of the cluster and computing resources.

        In the world of HPC, the focus of containerised applications is not necessarily on DevOps, but on the ability to minimise HPC node configuration and manage applications' software dependencies through containers. Several open-source initiatives have addressed this problem of bringing containers to the HPC space, such as Singularity, Shifter, CharlieCloud and uDocker. In this sense, Singularity seems to be the most popular container system for HPC centres, but there are alternatives such as uDocker that support the execution of containers in user space, a key feature in HPC platforms. Therefore, it is important to analyze the benefits and drawbacks of these solutions when they are deployed in real HPC systems and applied to scientific production applications.

        All these tools, with potentially similar characteristics, bring the benefits of containers to the HPC world. However, it is important to analyze key metrics in order to determine the advantages of one over another. The fields to analyze include, but are not limited to: interaction with Docker, support for Graphics Processing Units (GPU), support for low-latency interconnects such as InfiniBand, support for the Message Passing Interface (MPI), security and portability, privilege model, and integration with Local Resource Management Systems (LRMS), among others. The objective of this communication is to show the behaviour and limitations of different container technologies in the context of HPC systems.

        Keywords: virtualization, HPC, uDocker, Singularity, comparison, metrics.


        1 DevOps (Development and Operation) refers in this context to continuous integration/continuous delivery (CI/CD).

        Speaker: Ms Diana María Naranjo Delgado (UPV)
    • Plenary Session on Environmental Sciences
      Convener: Davide Salomoni
      • 20
        Understanding and forecasting the Portuguese marine environment: the activity of Instituto Hidrográfico in the area of physical oceanography.

        Instituto Hidrografico is a Portuguese State Laboratory, founded in 1960, whose main mission is the monitoring and study of the marine environment in order to support the Portuguese Navy and to contribute to national development in the areas of Marine Sciences and Marine Technologies. The activity of Instituto Hidrografico covers domains such as hydrography/cartography, physical oceanography, marine geology, marine chemistry and pollution, and safety of navigation. In this contribution we focus on the area of physical oceanography, viewed as an excellent example of the commitments, challenges and opportunities faced today by Instituto Hidrografico. Central to the activity developed in this area is the operation of a large real-time monitoring infrastructure covering the Portuguese marine area, which includes observing systems installed both on land and offshore. These different systems generate a large flow of data that is received daily at Instituto Hidrografico and disseminated from here to different users. In addition to this permanent observation effort, other more time-limited observation programs are conducted, namely during multidisciplinary surveys on board hydrographic vessels. The observation activity is complemented and extended by numerical modelling activities. Numerical models are used at Instituto Hidrografico to provide in-depth understanding of oceanographic processes, to allow a comprehensive 3D/4D picture of the marine environment to be built from the observations, and to forecast the future evolution of oceanographic conditions from knowledge of the present state of the ocean. These different areas of activity are supported by a number of infrastructures installed at Instituto Hidrografico, namely computer clusters for parallel computing. They have all benefited from the inclusion of Instituto Hidrografico as a partner in different national and European projects such as (among the most recent) JERICO-NEXT (H2020-INFRAIA), MARISK (INTERREG) and MYCOAST (EU INTERREG Atlantic Area).

        Speaker: Joao Vitorino
      • 21
        SOCIB regional ocean observing and forecasting infrastructure in the Western Mediterranean Sea

        SOCIB (Balearic Islands Coastal Observing and Forecasting System, www.socib.es) is a coastal ocean observing and forecasting infrastructure located in the Western Mediterranean Sea. SOCIB collects and distributes data from near-shore to open ocean through the operation of multi-platform observing systems from fixed moorings, drifting buoys, research vessel, gliders, HF radar, animal tracking systems and beach monitoring stations. It provides free and quality-controlled observations and products to address both science and society needs.

        SOCIB operates three ocean prediction systems aiming to predict the short-term evolution of (1) ocean temperature, salinity, sea level and currents, (2) waves and (3) meteotsunamis. Their outputs are disseminated on the web and integrated into specific SOCIB products and services tailored to the needs of specific sectors and end-users.

        This presentation will provide an overview of the main components of SOCIB, from observations to prediction systems and applications, also including collaborative projects with Iberian partners.

        Speaker: Baptiste Mourre
      • 22
        Computational engineering services for all: LNEC experience as an INCD/IBERGRID/EOSC user

        The computational resources necessary to address major environmental scientific questions are seldom available in-house, making shared e-infrastructures a well-suited medium for performing complex model simulations, analyzing large datasets and applying decision support tools. Despite this potential, the technical expertise required to use these computational resources and to build products on top of them is very specialized and requires a combination of environmental scientists and computer science engineers for their development and maintenance.
        In the scope of the Portuguese Infrastructures Roadmap and of two H2020 European Open Science Cloud e-infrastructures projects, several e-services dedicated to environmental sciences have been developed by LNEC and its partners and made freely available to promote the work of environmental scientists and engineers. These services encapsulate several state-of-the-art numerical models and data analysis tools, and are offered through dedicated, user-friendly Web apps. These tools hide the complexity of e-infrastructures resources allocation from the user and simplify the application of the modeling and data components.
        This presentation describes two of these services in detail:
        • OPENCoastS, a service that assembles on-demand circulation forecast systems for user-selected coastal areas and keeps them running operationally for a period defined by the user, using INCD and IFCA computational resources.
        • WorSiCa (Water mOnitoRing SentInel Cloud platform), a service that integrates remote sensing and in-situ data for the determination of water presence in coastal and inland areas, applicable to a range of purposes from the determination of flooded areas (from rainfall, storms, hurricanes or tsunamis) to the detection of large water leaks in major water distribution networks.
        The OPENCoastS service is based on the application of the modeling suite SCHISM and generates daily forecasts of water levels and vertically averaged velocities over the region of interest for 48 hours, based on numerical simulations of the relevant physical processes.
        WorSiCa is a one-stop-shop service to provide access to customized remote sensing services based on Copernicus data, currently applied to the detection of the coastal water land interface and the inland water detection (for large water infrastructure leak detection).

        Speaker: Anabela Oliveira
      • 23
        Improving access and use of GBIF through infrastructure cooperation at the Iberian level

        The Global Biodiversity Information Facility (GBIF) is a global government-level effort to mobilise and make freely available online primary biodiversity data for all biological groups. Through GBIF, more than 1.3 billion records are currently available globally and 39 million for the Iberian Peninsula. Both Portugal and Spain implemented national data portals to facilitate users’ access to biodiversity data in full context and advanced ways, not available at the global level.

        The main drivers of biodiversity distribution are related to environmental and climatic factors. Species occurrences are not constrained by political borders. Therefore, access to biodiversity data for scientific and management purposes should be possible in a biogeographic context, enabling the analysis of information in ecologically meaningful scopes. Moreover, information systems and technological platforms should promote cross-border cooperation, so that species distribution modelling, species invasion, red listing and other conservation efforts can be seamlessly performed by researchers and users at the Iberian level.

        Many GBIF participants have adopted the open-source Atlas of Living Australia (ALA) platform, creating the Living Atlases (LA) community, in which both Portugal and Spain participate. In these countries, the national portals are supported by cloud computing services provided by the IBERGRID partners INCD and IFCA, respectively. These portals have been operating for more than three years, receiving thousands of accesses annually. The LA architecture is modular, including several APIs built on top of an infrastructure layer of databases (Cassandra, MySQL), file storage and indexes (SOLR). It is in the creation and configuration of this infrastructure layer that cloud computing excels, particularly in testing and updating environments.

        The web applications of the LA platform provide information integration, allowing the visualization of data as lists, maps, images and metadata. It is possible to create online reports of species lists based on localities or areas. Using spatial modules, biodiversity information can be crossed with geographic, spatial and environmental data, providing even more detailed reports. There are also analysis tools to perform species distribution modeling, red list assessments and other biodiversity-based analyses.

        A single LA infrastructure can support different portals using the hub module. In this way, it is possible to enable a thematic, an institutional or a regional portal. In this presentation, we will explore how the platform can be extended to share biodiversity data across Portugal and Spain, providing biogeographic facets that allow searches and analyses without breaks due to administrative borders. We will also discuss how cloud-based services provided by the grid computing community can facilitate this integration at the Iberian level, also enabling redundancy for security and availability of service. This shared vision between GBIF Portugal and GBIF Spain, in the scope of the national infrastructures PORBIOTA and LifeWatch-ES, may contribute to better support for research studies and natural resource management at the Iberian level.

        Speaker: Rui Figueira
      • 24
        LifeWatch ERIC: Consolidating synergies among Iberian e-Biodiversity communities through IBERGRID-IBERLIFE & EOSC (Synergy) initiatives
        Speaker: Dr Juan Miguel Gonzalez Aranda
    • 11:00
      Coffee break
    • Research Communities & EOSC

      International research communities and ESFRI clusters

      Convener: Anabela Oliveira
      • 25
        ESCAPE ESFRI cluster presentation
        Speaker: Dr Patrick Fuhrmann (DESY)
      • 26
        Cosmology @EOSC
        Speaker: Prof. Francisco Prada
      • 27
        Computational challenges related to IFMIF and DONES facilities.

        Following ITER, the DEMO reactor is expected to demonstrate the feasibility of safe, environmentally friendly and economically viable fusion power generation. During the operation of DEMO, the materials will be exposed to a particularly hostile environment as a consequence of the energetic neutrons created by fusion reactions in the plasma. The level of damage expected in fusion conditions is such that the performance of materials and components under these extreme irradiation conditions is unknown. One of the central objectives of the fusion materials program is to identify innovative materials development routes, using scientific understanding and knowledge of how materials properties evolve and change in the operating environment of a fusion power plant.

        In this respect, IFMIF is considered one of the main pillars of the international fusion program. Its two deuteron beams of 125 mA each will produce a damage rate behind the lithium target high enough to make information on materials damage at DEMO-relevant doses available within a few years. On the other hand, DONES (DEMO Oriented Neutron Source) has been conceived as a simplified IFMIF-like plant to provide, on a reduced time scale and with a reduced budget (both compared to IFMIF), the basic information on materials damage. Although both facilities are designed to provide experimental data on how material properties change under energetic neutron irradiation, the design of the experiments to be carried out to test materials implies various computational challenges. During our talk we shall review the different computational fields associated with the IFMIF and DONES facilities, such as beam dynamics, neutron transport, the calculation of collision cascades and the simulation of the microstructure evolution in irradiated materials.

        Speaker: Dr Christophe Ortiz (LNF - CIEMAT)
    • 13:00
      Lunch break
    • EOSC-synergy Thematic Services (WP4)
      Convener: Ignacio Blanquer Espert (Universitat Politècnica de València)
    • EOSC-synergy Work Packages Breakouts: WP2 & WP3
      Conveners: Jorge Gomes (LIP), Marcus Hardt
    • IBERGRID Contributions
      Convener: Andrés Gomez Tato
      • 28
        Machine Learning Pipelines on Medical Imaging

        The use of Artificial Intelligence (AI) over medical data allows the extraction of features associated with the disease from medical images using data-characterisation and modelling algorithms. The use of advanced machine learning algorithms is changing the way image processing is performed, evolving from analytic solutions to models built up with supervised training techniques working in complex Convolutional Neural Network (CNN) architectures. However, advanced AI techniques require a deep understanding of the behaviour of the model and non-trivial programming skills. This limits the application of AI to researchers who have a deep understanding of the medical problem but lack those specific technical skills.
        In this work, we will compare an application for the automatic diagnosis of Rheumatic Heart Disease (RHD) from echocardio videos on children, implemented using Keras with an equivalent application deployed using a machine learning workflow system (LEMONADE).
        The processing pipeline requires 7 steps: frame splitting, which splits a video into frames; automatic classification into Doppler and anatomical images by color inspection (only Doppler images are used during the rest of the pipeline); color-based segmentation through k-means clustering; image preprocessing and view classification by using a CNN; first- and second-order texture analysis and blood-flow velocity calculation; z-score feature normalization; and classification of the extracted features, through machine learning techniques, into RHD-positive or healthy studies.
        The implementation in Keras uses the pre-trained models for the classification of the views within the estimation of the RHD. All the components are delivered as containers, facilitating their distribution and the integration of new components in LEMONADE.
        The processing backend is a Kubernetes cluster provided with GPU nodes attached through PCI passthrough to the Virtual Machines and the containers. In this way, there is no penalty on the usage of the GPUs from the applications. Data are stored directly on a persistent storage object exported through an SSH server. As communications are encrypted, data access is measured separately.
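
        As a purely illustrative sketch of the last two pipeline stages (z-score normalisation followed by a supervised classifier), with a placeholder feature matrix instead of the real texture and blood-flow features and scikit-learn standing in for the actual models:

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        # Placeholder feature matrix: one row per echocardio study, columns are
        # extracted texture/velocity features; labels: 1 = RHD, 0 = healthy.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 12))
        y = rng.integers(0, 2, size=200)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # z-score normalisation of the features (fitted on the training split only)
        scaler = StandardScaler().fit(X_train)

        # A simple classifier standing in for the final RHD/healthy decision step
        clf = LogisticRegression(max_iter=1000)
        clf.fit(scaler.transform(X_train), y_train)
        print("test accuracy:", clf.score(scaler.transform(X_test), y_test))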

        Speaker: Ignacio Blanquer Espert (Universitat Politècnica de València)
      • 29
        DEEP-Hybrid Datacloud: a project summary

        The DEEP-Hybrid-DataCloud project researches intensive computing techniques, such as deep learning, that require specialized GPU hardware to explore very large datasets, through a hybrid-cloud approach that enables access to such resources. DEEP is built on a user-centric policy, i.e. we understand the needs of our user communities and help them combine their services in a way that encapsulates technical details the end user does not have to deal with. DEEP takes care to support users of different levels of experience by providing different integration paths. We show our current solutions to the problem, which among others include the Open Catalog for deep learning applications, the DEEP-as-a-Service API for providing web access to machine learning models, a CI/CD pipeline for user applications, and testbed resources. We also present our use cases, which tackle various problems by means of deep learning and serve to demonstrate the usefulness and scalability of our approach.

        Speaker: Dr Alvaro Lopez Garcia (IFCA-CSIC)
      • 30
        A Data Science framework in the INCD

        INCD - the National Distributed Computing Infrastructure - is a Portuguese digital infrastructure designed to support the national scientific and academic community, providing computing and storage services in all areas of knowledge. LNEC – the National Laboratory for Civil Engineering – is one of the partners that collaborate in this initiative, developing use cases that take advantage of the available infrastructure. This work reports on a Data Science framework based on Conda that was developed as part of this collaboration. The use of this framework allows researchers to benefit from the INCD infrastructure, running their research scripts and using the several Conda packages available, including Jupyter Notebook. To showcase the framework, two case studies were implemented, demonstrating the use of Machine Learning algorithms applied to data generated by dam safety monitoring systems.
        The first case study presents a prediction setting, implemented in Python, in which Multiple Linear Regression (MLR) and Neural Networks (NN) are trained and used to predict dam behavior from manually collected data. Environmental variables are used as predictors for both the MLR and the NN. Both predictions are evaluated and compared, also using the developed framework. Note that such predictions heavily depend on the specific properties of each data set. Thus, the capabilities of this environment on top of the INCD infrastructure enable a flexible adaptation of each prediction that can be easily tuned to each specific case.
        A classification task is proposed in the second case study, implemented in Python and R, using the DBSCAN (Density-based spatial clustering of applications with noise) clustering algorithm to identify outliers in automatically collected data from sensors installed on Portuguese dams. Together with the dam response variable, environmental variables are used to obtain the clusters and detect outliers. Afterwards, PCA (Principal Component Analysis) is used to obtain a 2D plot to visualize outliers identified by the DBSCAN.
        Other pieces of research were also developed using this framework, including the use of Deep Learning (more specifically, Recurrent NN) to improve prediction of dam behavior, using Keras and TensorFlow, which benefited from the INCD infrastructure for improved computation times.
        Finally, it is important to remark that the framework was recently presented to several LNEC researchers and was received with great interest, with most of the researchers already starting to use INCD to create and run their scripts in a cloud environment.
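
        As a minimal sketch of the approach taken in the second case study (using synthetic data and scikit-learn in place of the actual dam monitoring series and scripts), DBSCAN can flag low-density points as outliers and PCA can project the data to 2D for inspection:

        import numpy as np
        from sklearn.cluster import DBSCAN
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in for the dam response + environmental variables
        rng = np.random.default_rng(1)
        normal = rng.normal(0.0, 1.0, size=(300, 4))
        injected = rng.normal(6.0, 1.0, size=(5, 4))       # artificial outliers
        X = StandardScaler().fit_transform(np.vstack([normal, injected]))

        # DBSCAN labels low-density points as -1, i.e. potential outliers
        labels = DBSCAN(eps=1.5, min_samples=10).fit_predict(X)
        outlier_idx = np.where(labels == -1)[0]
        print("potential outliers:", outlier_idx)           # should include 300-304

        # PCA projection to 2D, e.g. for plotting clusters vs. outliers
        X2d = PCA(n_components=2).fit_transform(X)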

        Speaker: António Antunes (Laboratório Nacional de Engenharia Civil)
      • 31
        Using HPC to enable coastal waters observatories

        Coastal systems are among the most productive ecosystems in the world, providing multiple resources and guaranteeing the resilience of coastal communities. Climate change (e.g., sea level rise) represents a major threat to the world’s coastal systems, via potential increases in salinity, acceleration of nutrient cycling and disruption of aquatic ecosystems. Also, recent and predicted increases in nutrient loads to coastal systems may exacerbate these impacts.
        Coastal waters observatories can support both the daily and the long-term management of coastal ecosystems, allowing the continuous surveillance of coastal zones and the establishment of adaptation measures. In the project UBEST the concept of coastal waters observatories is extended and demonstrated in two Portuguese coastal systems, the Tagus estuary and the Ria Formosa, to improve the global understanding of the biogeochemical buffering capacity of coastal ecosystems and their susceptibility to future scenarios of anthropogenic inputs and climate change.
        The observatories developed in UBEST include several layers of information that integrate historical and real-time observations, forecasts, scenario analyses and indicators in a comprehensive web portal. The integration of all these layers provides information that covers different temporal scales, presented with different levels of complexity, providing end-users with more robust tools to support decision-making. However, the extension of coastal waters observatories to integrate more layers of information brings several challenges, among them the requirement for more computational resources. In this context, High Performance Computing (HPC) is a powerful resource to enable the next generation of coastal waters observatories.
        HPC, such as grid clusters, parallel computing or cloud computing, is used by the coastal modeling community to solve complex, very demanding problems. In UBEST, HPC is used at two levels: i) for high-resolution forecasts and scenarios simulations of the circulation and water quality dynamics in the two coastal systems, and ii) to provide computational power to process data and model results through predefined or user requests at the web-portal.
        The simulations in the Tagus estuary and the Ria Formosa are performed with SCHISM, a parallelized model that uses the MPI (Message Passing Interface) paradigm. Daily forecasts of water levels and 3D currents, salinity, temperature and biogeochemical variables are deployed with the WIFF – Water Information and Forecasting Framework and the OPENCoastS service. The scenarios analysis provides long-term information of the biogeochemical buffering capacity of each system under present conditions and for scenarios of climate change (e.g. sea level rise) and anthropogenic pressures (e.g. wastewater discharges). The use of HPC allows both the timely production of daily forecasts and the generation of long-term simulations for the scenarios.
        The UBEST web portal, developed using Django, allows access to all the data and model results through four dashboards: Data, Forecasts, Scenarios and Indicators. Several services and products are made available to the users, such as statistics of historical data, data from virtual sensors, and physical and water quality indicators.
        The implementation of HPC in the UBEST water observatories was achieved using the INCD – the Portuguese National Infrastructure for Distributed Computing.

        Speaker: Dr Anabela Oliveira (Laboratório Nacional de Engenharia Civil)
      • 32
        RESCCUE RAF app – an IT solution for digital interactive urban resilience assessment

        Climate change (CC) adaptation plays an important role in city and services management and resilience building, targeting the mitigation of and adaptation to potential hazards in urban areas. Information technologies can play a leading role in promoting fast adoption of the most relevant measures towards CC preparedness. In this paper, a web application is presented with the objective of empowering city and services managers with an accessible and reliable tool. The RESCCUE RAF App materializes a detailed CC resilience evaluation methodology with a user-friendly Web interface. It provides an evaluation of city resilience to CC impacts and urban systems vulnerabilities, allowing managers to assess multi-sector dependencies under multiple CC scenarios. The app is integrated as a service of the Portuguese Infrastructures Roadmap, under the Infraestrutura Nacional de Computação Distribuída (INCD) infrastructure initiative, which provides the app with the resources for data computation and storage and ensures its scalability to handle multiple user requests as well as database storage growth.

        The information provided by this app gives city and urban services managers an assessment that allows them to know where they stand and to identify resilience gaps, thus supporting decisions on the most advantageous investments in the city and its services and planning to cope with future challenges. Three case studies are being carried out in different cities (Barcelona, Lisbon and Bristol). Access to the application is made using credentials given upon request, to ensure data confidentiality. Inside the user's area, the user can fill in, in an interactive way, detailed information about the selected city regarding multiple aspects such as the financial plan per service, the date of the last review of the City Master Plan, the history of climate hazards in the city or the level of dependency between services. This information is then processed and several indicators are calculated on the fly. The assessment allows the identification of development levels, ranging from the whole city to a more detailed assessment regarding a specific service. Data are stored in the RESCCUE RAF App database at INCD and can be easily analyzed and extracted by the user. These results support the city and services managers in making effective decisions to plan city resilience enhancement.

        In this paper, the architecture and computational choices behind the RESCCUE RAF App and its Web interface, and their integration in the INCD infrastructure, will be presented in detail. Given its importance, generic nature and flexible structure, the RESCCUE RAF App can be extended to other cities and, in the future, to other urban services or hazards, taking advantage of the INCD e-infrastructure. This complete and in-depth assessment of city resilience to CC challenges at the Portuguese, Iberian and European scale is fundamental to plan CC adaptation and the implementation of strategies, preventing both human and material losses as well as environmental damages.

        Speaker: Mr Pedro Lopes (Laboratório Nacional de Engenharia Civil)
      • 33
        Using Cloud Computing and Open Data to Improve Knowledge in the Insurance Sector

        We present a technology transfer project where Cloud Computing and Open Data play a crucial role. Our aim is to accurately and efficiently model data from the Spanish car insurance sector. Due to the vast amount of data and the complexity of the models, the use of Cloud Computing is needed to ensure not only an efficient but also a feasible implementation of the model. The system was deployed on the OpenStack cloud platform of our Institute and it is portable to other cloud services such as Amazon Web Services. In addition to the usage of cloud technologies we also benefit from Big Data tools such as TensorFlow, ElasticSearch, Kibana or Spark.

        The insurance sector is an important and growing sector of the Spanish economy, representing 5.5% of the GDP in 2017. Our data comes primarily from the quote calculator Avant2 of the software company Codeoscopic. This calculator allows insurance agents to evaluate a specific risk (vehicle, driver, ...) with many insurance companies and get quotes for different modalities. However, the companies' quoting criteria are a black box. Finding information about this underlying process could shed light on the differences between companies or regions and, ultimately, improve the Avant2 platform. Nonetheless, the companies' quotes were not completely explained by using only direct variables associated with the risk. To overcome this hurdle, we also enriched our model with geographical data such as climate conditions, traffic accidents or socio-economic variables. This information was collected from several open source portals. Once we incorporated the open data component, we found a significant improvement in the model's accuracy compared to using only internal data.

        Speaker: Gonzalo Ruiz (Instituto Universitario de Investigación Biocomputación y Física de Sistemas Complejos, Universidad de Zaragoza)
      • 34
        TRAFAIR: Understanding Traffic Flow to Improve Air Quality

        Road traffic is among the main sources of air pollution. Taking into account that air pollution causes 400 000 deaths per year, making it the first environmental cause of premature death in Europe, the environmental impacts of traffic are of major concern throughout many European metropolitan areas.

        In February 2017, the European Commission warned five countries, among which Spain and Italy, of continued air pollution breaches. In this context, public administrations and citizens suffer from the lack of comprehensive and fast tools to estimate the level of pollution on an urban scale resulting from varying traffic flow conditions that would allow optimizing control strategies and increase air quality awareness.

        The TRAFAIR project arose from this premise; it brings together 9 partners from two European countries (Italy and Spain) to develop innovative and sustainable services combining air quality, weather conditions, and traffic flow data to produce new information for the benefit of citizens and government decision-makers. The project started in November 2018 and will last two years.

        The TRAFAIR project aims at achieving four main results:
        1) Definition of a standard set of metadata (based and extending the ones adopted at European level and defined by FAIRMODE) able to represent urban air quality maps.
        2) Provision of real-time estimations of air pollution in the city on an urban scale (using a set of low-cost air quality sensors and combining them with measurements from the regulatory air quality stations in order to build an informative map of the different levels of pollution in the urban areas).
        3) Development of a service for the prediction of urban air quality based on weather forecasts and traffic flows. This service makes use of open-source and HPC technologies in order to compute the estimation of the diffusion of pollutants in the urban area.
        4) Publication of an open dataset describing the urban air quality maps and the prediction maps in 6 European cities of different sizes on which the service will run for the duration of the project: Zaragoza (600000 inhabitants), Florence (382000), Modena (185000), Livorno (160000), Santiago de Compostela (95000), Pisa (9.000). These datasets (including metadata) will be published on catalogs harvested by the European Data Portal.

        The project is co-financed by the European Commission under the CEF Telecom call on Open Data.

        Speaker: Cecilia Grela Llerena (Galicia Supercomputing Centre (CESGA))
    • Tutorial: Enhance your Data Infrastructure with Onedata, Jupyter and beyond.

      Hands-on session for sysadmins and programmers.

    • WP6 Breakouts
      Convener: Marcin Plociennik (PSNC)
    • 16:15
      Coffee break
    • Cross-Work Package meeting: WP4-WP3-WP2
      Convener: Isabel Campos (CSIC)
    • Tutorial: dCache: Managing exabytes of geographically distributed data with dCache - cloud enabled storage at your fingertips

      Tutorial Oriented to end-users

    • WP6 Breakouts
      Convener: Marcin Plociennik (PSNC)
    • 19:30
      Social events

      Guided tour through the historical streets of Santiago de Compostela, followed by Conference Dinner

    • Innovative Software Services
      Convener: Jorge Gomes
      • 35
        IBM-Q - Online Quantum Computing Platform

        CSIC and IBM have signed a contract to provide researchers with access to a quantum computer with 20 qubits via a cloud service. This contract extends the possibilities already offered by the 5-qubit and 16-qubit free-tier services towards devices with greater quantum volume. This talk will offer a high-level overview of quantum computing (operations, implementation, potential), focusing on how these services are actually offered and used, and how they can be integrated into hybrid quantum-classical work pipelines.
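
        In practice, these IBM Q cloud devices are accessed programmatically through the Qiskit Python SDK; as a minimal sketch (using the local simulator rather than real hardware, and assuming the circa-2019 Qiskit API), a user-submitted circuit looks roughly like this:

        from qiskit import QuantumCircuit, Aer, execute

        # Build a 2-qubit Bell-state circuit
        qc = QuantumCircuit(2, 2)
        qc.h(0)                      # put qubit 0 into superposition
        qc.cx(0, 1)                  # entangle qubits 0 and 1
        qc.measure([0, 1], [0, 1])

        # Run on a local simulator; a real IBM Q device would instead be
        # selected through the user's IBMQ provider account.
        backend = Aer.get_backend("qasm_simulator")
        counts = execute(qc, backend, shots=1024).result().get_counts()
        print(counts)                # expected: roughly half '00' and half '11'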

        Speaker: Juan José García Ripoll
      • 36
        Serverless: What’s in a name for scientific computing?
        Speaker: Germán Moltó
      • 37
        Innovative Software Services
        Speaker: Davide Salomoni
    • 11:00
      Coffee break
    • Evolution of production software tools in EOSC

      Contributions

      • Cloud IaaS/PaaS and EGI Notebook service - Enol Fernandez
      • Federated data management requirements and technical roadmap - Baptiste Grenier
      • Check-in technical roadmap and RCauth status and plans - Nicolas Liampotis
      • DEEP-HybridDataCloud: present and future - Alvaro Lopez
      • eXtreme DataCloud: present and future - Daniele Cesini
      • SQAaaS - Evolution of best practices for service deployment and interoperability checks - Joao Pina
      Conveners: Doina Cristina Duma (INFN), Mario David (LIP)
      • 38
        Welcome

        Presenting the goal of session and contributors

        Speakers: Doina Cristina Duma (INFN CNAF), Mário David (LIP)
      • 39
        Cloud IaaS/PaaS and EGI Notebook service

        This presentation will provide an overview of the ongoing and new developments coming to three of the EGI computing services: Cloud Compute, which offers a federated multi-cloud IaaS; Cloud Container Compute, which offers a Kubernetes-based platform for running Docker applications; and Notebooks, a fully managed interactive computing service based on Jupyter.

        Speaker: Enol Fernández (EGI Foundation)
      • 40
        Federated data management requirements and technical roadmap

        The presentation provides an overview of the requirements gathered during the Data Management Workshop, where XDC, ESCAPE and EGI met three important user communities to design with them some Research Infrastructure-specific solutions and pilot activities. After this, the EGI data-related services and their status are presented, and the presentation eventually looks forward by covering some scouting activities that highlight solutions which could complement and augment the EGI service offering.

        Speaker: Baptiste Grenier (EGI Foundation)
      • 41
        Check-in technical roadmap and RCauth status and plans

        The EGI Check-in service is an Identity and Access Management solution that makes it easy to secure access to services and resources. Check-in is one of the enabling services for the EOSC-hub AAI, following the architectural and policy recommendations defined in the AARC project. Through Check-in, users are able to authenticate with the credentials provided by the IdP of their Home Organisation (e.g. via eduGAIN), as well as using social identity providers or other selected external identity providers. Check-in provides an intuitive interface for communities to manage their users and their respective groups, roles and access rights. For communities operating their own group management system, Check-in has a comprehensive list of connectors that allow such systems to be integrated as externally managed Attribute Authorities.

        The adoption of standards and open technologies, including SAML 2.0, OpenID Connect and OAuth 2.0, facilitates integration with web-based services. Options to support non-web services, which traditionally relied on X.509 certificates, are based around the concept of online authorities with attached credential stores, such as RCauth.eu with a tightly coupled MyProxy server. Such techniques allow science gateways to obtain credentials on behalf of the end user that can be used to authenticate directly to services. Another user-centric approach considers certificate proxies as opaque tokens that can be obtained from a credential store from the command line using SSH authentication. The deployed RCauth.eu and MasterPortal service from AARC features both of these capabilities and has been shown to work for the production EGI and WLCG environments. The currently operational RCauth.eu is being re-engineered to allow for state consistency between a geographically distributed set of hosting sites. The presentation will provide an overview of the EGI Check-in technical roadmap and the evolution of the RCauth service towards a distributed deployment architecture.
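
        As a purely hypothetical illustration of how a non-web, command-line client can obtain a token from an OpenID Connect/OAuth 2.0 provider such as Check-in (the endpoint URLs and client id below are placeholders, and the device authorization flow shown is the generic OAuth 2.0 mechanism rather than a documented Check-in recipe):

        import time
        import requests

        # Placeholder endpoints and client id; real values would come from the
        # provider's OpenID Connect discovery document and client registration.
        DEVICE_ENDPOINT = "https://aai.example.org/oidc/devicecode"
        TOKEN_ENDPOINT = "https://aai.example.org/oidc/token"
        CLIENT_ID = "my-cli-client"

        # 1. Ask the provider for a device code and a user verification URL
        dev = requests.post(DEVICE_ENDPOINT,
                            data={"client_id": CLIENT_ID, "scope": "openid profile"}).json()
        print("Visit", dev["verification_uri"], "and enter code", dev["user_code"])

        # 2. Poll the token endpoint until the user has authenticated in a browser
        while True:
            time.sleep(dev.get("interval", 5))
            resp = requests.post(TOKEN_ENDPOINT, data={
                "client_id": CLIENT_ID,
                "device_code": dev["device_code"],
                "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            })
            if resp.status_code == 200:
                access_token = resp.json()["access_token"]
                break

        # 3. Present the bearer token to a protected service API
        r = requests.get("https://service.example.org/api/userinfo",
                         headers={"Authorization": "Bearer " + access_token})
        print(r.status_code)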

        Speakers: Micha Sallé (NIKHEF), Nicolas Liampotis (GRNET)
      • 42
        DEEP-Hybrid Datacloud: present and future

        The DEEP-Hybrid-DataCloud project is providing a set of comprehensive services for machine learning and deep learning, allowing scientists to train, test, evaluate, share and exploit their models over distributed e-Infrastructures. New advancements will be presented and described, as well as the future exploitation of the proposed solutions.

        Speaker: Dr Alvaro Lopez Garcia (IFCA-CSIC)
      • 43
        eXtreme DataCloud: present and future

        The eXtreme DataCloud (XDC) project aims at developing data management services capable of coping with very large data resources, allowing future e-infrastructures to address the needs of the next-generation, extreme-scale scientific experiments. Started in November 2017 and combining the expertise of 8 large European research organisations, the project develops scalable technologies for federating storage resources and managing data in highly distributed computing environments.
        The state of the art of the developed solutions, together with the new advancements, will be presented and described during the session.

        Speaker: Daniele Cesini (INFN - CNAF)
      • 44
        EOSC-hub TCOM SQA area: status and future

        An overview of the work done by the EOSC-hub Technology Committee (TCOM) in the Software Quality Assurance area, with special focus on the EOSC-hub technical workshop that served as input for the EOSC architecture and service roadmap.

        Speaker: João Pina (LIP)
    • IBERGRID Contributions
      Convener: Nadia Tonello
      • 45
        Baseline criteria for achieving software quality within the European research ecosystem

        The release of the “A set of Common Software Quality Assurance Baseline Criteria for Research Projects” document (hereafter referred to as the “SQA baseline criteria”) resulted from the need to fill a gap in the European research software engineering ecosystem. The document defines a Software Quality Assurance (SQA) plan that maintains a pragmatic set of requirements, best practices and recommendations to drive adequate development, timely delivery and reliable operation of the software assets produced within a research software development project.

        The SQA baseline criteria covers the basic practices of making the source code open and accessible, pointing to the relevant open-source licenses and code hosting platforms. With regard to source code management, it provides specific guidance on the use of a change-based approach, by means of a version control system (VCS), that relies on a branching model to handle incoming new features or bug fixes while separating development and stable versions. Every relevant change in the code must be tested to avoid disruptions in the supported major branches or releases.

        By following the aforementioned change-based approach, the SQA baseline criteria emphasizes the idea of acting at the early stages of the software lifecycle as the catalyst for maximizing the effectiveness of resolving issues (bugs, security flaws) with the lowest effort and cost. In this regard, the primary focus is put on static analysis and testing (such as unit/functional testing and vulnerability scanning), encouraging developers to write meaningful test cases that provide enough coverage of the system operation. At this stage, the readability and maintainability of the code are also essential quality requirements, achievable by making the source code compliant with the relevant programming language’s style standard.
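
        As an illustration of the kind of meaningful, automatable test cases the criteria encourage, the following minimal pytest sketch exercises a made-up helper function over normal and edge-case inputs; the function and its tests are purely illustrative and are not part of the SQA baseline criteria document.

        # Illustrative unit tests in pytest style; the function under test is a
        # made-up example, chosen only to show coverage of normal and edge cases.
        import pytest

        def moving_average(values, window):
            """Return the simple moving average of `values` over `window` samples."""
            if window <= 0:
                raise ValueError("window must be positive")
            if len(values) < window:
                return []
            return [sum(values[i:i + window]) / window
                    for i in range(len(values) - window + 1)]

        def test_moving_average_basic():
            assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]

        def test_moving_average_short_input_returns_empty():
            assert moving_average([1], 3) == []

        def test_moving_average_rejects_non_positive_window():
            with pytest.raises(ValueError):
                moving_average([1, 2, 3], 0)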

        The documentation attached to the software is key to its adoption, and the SQA baseline criteria suggests that it be treated as code, through the use of markup languages and VCSs. Consequently, the documentation is versioned, with the capability of being rendered in multiple online documentation repositories. As the last requirement in the described change-based approach, a human-based review shall be performed in order to consider a set of aspects that cannot be assessed automatically, such as the change suitability or the understandability of the documentation.

        Best practices at later stages include interoperability assessment through the execution of integration tests that ensure operation with external components, open standards and protocols. A further security analysis is also performed at this stage, by checking for common security flaws, thus covering two of the fundamental pillars of the dynamic analysis of the software.

        The SQA baseline criteria as presented here has been elaborated based on the first-hand experience of several European-funded software development projects. It is actively maintained (currently on version 2.0), available online, and open to collaboration and discussion. The aim is to keep improving and extending the document in order to consolidate it as a reference point for future research projects that involve the development of software.

        Speakers: Pablo Orviz Fernández (IFCA-CSIC), Cristina Duma
      • 46
        Orchestrated satellite data management

        With the latest missions launched by ESA and NASA, such as Sentinel or Landsat, equipped with the latest multispectral sensor technologies, we face an unprecedented amount of satellite data. Exploring the potential of these data with state-of-the-art Artificial Intelligence techniques such as Deep Learning could change the way we understand the Earth system and how we protect its resources.

        The eXtreme-DataCloud (XDC) project, under the umbrella of the H2020 programme, aims at developing a scalable environment for data management and computing, addressing the problems of the growing data volume and focusing on providing a complete framework for research communities through the European Open Science Cloud. The target of this project is to integrate different Cloud Computing based services and tools to manage Big Data sources, with Use Cases from diverse disciplines represented. One of the goals of the project is to deal with extremely large datasets, including diverse data and metadata types, formats and standards, enabling the automatic integration of Big Data.

        In order to interoperate those big data sources, the XDC LifeWatch ERIC Use Case proposes a Virtual Research Environment (VRE) deployed on the Cloud that allows users to preprocess the satellite data and obtain valuable information about the water quality of lakes and reservoirs, without the need for local resources and while hiding the underlying complexity. The architecture of this virtual environment consists of different Docker containers that run automatically and share a common distributed storage system (Onedata) capable of storing the data with associated metadata that facilitates discovery. The workflow that the VRE uses to preprocess the satellite data is managed by the INDIGO PaaS Orchestrator.

        This presentation will describe the architectural design of the VRE and its different components (Jupyter interface, Docker deployment for data preprocessing, modelling, etc.), as well as details on how this cloud-based approach can be adapted to many other use cases.
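
        As a sketch of the kind of preprocessing step such a VRE automates, the following Python snippet computes a water-related index (NDWI) from two satellite bands using rasterio and NumPy. The file names and band assignment are assumptions for illustration; real Sentinel-2 products additionally require resampling and atmospheric correction.

        # Sketch of a preprocessing step: computing NDWI from two bands.
        # File names and band choices are illustrative placeholders.
        import numpy as np
        import rasterio

        with rasterio.open("S2_scene_green.tif") as green_src, \
             rasterio.open("S2_scene_nir.tif") as nir_src:
            green = green_src.read(1).astype("float32")
            nir = nir_src.read(1).astype("float32")

        # NDWI = (Green - NIR) / (Green + NIR); mask out division by zero.
        denominator = green + nir
        ndwi = np.where(denominator != 0, (green - nir) / denominator, np.nan)

        print("Mean NDWI over the scene:", np.nanmean(ndwi))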

        Speaker: Mr Daniel Garcia Diaz (IFCA)
      • 47
        The CERN analysis preservation portal

        The CERN analysis preservation portal (CAP) comprises a set of tools and services aiming to assist researchers in describing and preserving all the components of a physics analysis such as data, software and computing environment. Together with the associated documentation, all these assets are kept in one place so that the analysis can be fully or partially reused even several years after the publication of the original scientific results. An experiment-specific submission and retrieval interface has been developed for the CMS collaboration. It integrates with the CMS internal analysis registry (CADI) to capture all analyses with basic information, complemented with a detailed submission form for full information. The CMS data aggregation system (DAS) is interfaced to the deposit form to assist in filling in exact dataset names used in the analysis to ensure searchability. Efforts are ongoing to describe physics content for an intelligent retrieval, and to interface with container solutions for full reproducibility for selected test cases.

        Speaker: Lara Lloret Iglesias (CSIC)
      • 48
        Past and Future Challenges for Distributed Computing at the ATLAS Experiment on the Iberian Peninsula

        ATLAS is one of the big detector experiments at the Large Hadron Collider (LHC) at CERN. The LHC is in a scheduled shutdown until the end of 2020 to upgrade both the collider and the detectors, which also brings new challenges for ATLAS distributed computing (ADC). The higher luminosity in the next run will significantly increase the data rate and storage needs, and a higher efficiency in data processing will also be required. On a longer time scale, the next scheduled upgrade, to the High-Luminosity LHC, is foreseen to start during 2026 with an even bigger impact and requires long-term preparation, both of the worldwide storage and computing infrastructure and of the software tools.
        The Iberian ATLAS Tier-1 and Tier-2s in Spain and Portugal form one regional component of the worldwide ADC infrastructure. They have more than 15 years of experience in the deployment and development of LHC computing components and their successful operation. The sites are already actively participating in, and even coordinating, emerging R&D computing activities that develop the new computing models needed for the LHC Run 3 and HL-LHC periods.
        In this contribution, we present details of these development activities, such as

        • HPC computing resources to execute ATLAS simulation workflows;
        • the development of new techniques to improve efficiency in a cost-effective way, such as storage and CPU federations;
        • recent developments of new monitoring tools that allow a more efficient control of the worldwide computing and storage operations;
        • and improvements in Data Organization, Management and Access through storage consolidations ("data-lakes"), the use of data caches, and improved experiment data catalogues, such as the Event Index.

        The design and deployment of novel analysis facilities using GPUs together with CPUs and techniques like Machine Learning will also be presented.
        We also present the status of the Iberian ATLAS Tier-1 and Tier-2 sites, taking into account the national perspectives and how they can continue contributing to this significant computing R&D by evaluating different models and improving the performance of the computing and data storage capacity in the LHC High-Luminosity era.

        Speaker: Helmut Wolters (LIP)
      • 49
        Data Science in High Energy Physics

        High Energy Physics is a big data task that requires modern data science tools for storage, processing and analysis. In this contribution we give an overview of the applications of machine learning, namely the modern deep learning approach, to aid research in collider physics and related topics. More specifically, we will show how Convolutional Neural Networks can help us learn new observables for jet physics, and how other Artificial Neural Networks are becoming the new paradigm for data analysis at the Large Hadron Collider. Due to the complexity of the task and the volume of the data used, these neural networks are implemented in Keras using TensorFlow and trained on high-performance Graphics Processing Units.
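
        As a toy sketch of this kind of model, the following Python snippet defines a small Convolutional Neural Network for binary jet classification with the Keras API of TensorFlow. The 32x32 single-channel input and the architecture are illustrative choices, not the networks discussed in this contribution.

        # Toy CNN for binary classification of calorimeter-style "jet images".
        # Input shape and architecture are illustrative placeholders.
        import tensorflow as tf
        from tensorflow.keras import layers, models

        model = models.Sequential([
            layers.Input(shape=(32, 32, 1)),          # one-channel jet image
            layers.Conv2D(16, (3, 3), activation="relu"),
            layers.MaxPooling2D((2, 2)),
            layers.Conv2D(32, (3, 3), activation="relu"),
            layers.MaxPooling2D((2, 2)),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),    # signal vs background probability
        ])

        model.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.summary()
        # model.fit(train_images, train_labels, epochs=10, ...) would train the network,
        # typically on GPUs for realistic dataset sizes.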

        Speakers: Rute Pedro (LIP/FCUL), Tiago Vale (LIP)
      • 50
        Distributed Computing at the CMS Experiment

        Being one of the largest international scientific collaborations, CMS faces many challenges. To serve the computational needs of every researcher working within the Collaboration around the world, CMS relies on distributed computing technology for both computing power and data storage. The Large Hadron Collider (LHC) schedule alternates between data-taking periods and long shutdowns for maintenance and upgrades. During the current Long Shutdown 2, the CMS detector is being upgraded. Run 3 is scheduled to start in 2021 with an increase in luminosity. These two facts combined will pose new challenges for the LHC's distributed computing and data storage infrastructure, the Worldwide LHC Computing Grid (WLCG). A major upgrade to the LHC, the High Luminosity Large Hadron Collider (HL-LHC), aiming to increase the nominal luminosity by a factor of 5-7, is expected to start after Run 3, in 2026. Preparations for this upgrade have already started.

        As a member of the WLCG collaboration, Portugal has pledged to contribute to CMS Tier-2 sites with CPU and storage responsibilities.

        In this talk, we will present a brief overview of the involvement of the Portuguese group in the CMS experiment, from the physics analyses being carried out by the LIP CMS group to the computational needs we foresee for the next 10 years. We will cover the tools used for our physics analyses, our computational needs, the Portuguese role in the Tier-2 management, and how we plan to address the expected needs over the next 10 years.

        Speaker: Diogo de Bastos
    • 13:00
      Lunch break
    • Tutorial: Integrating Quantum Computing applications in the IBM Q
    • XDC project meeting (CLOSED)
      Convener: Daniele Cesini
    • 16:00
      Coffee break
    • EOSC-synergy AHM
    • Tutorial: Authorization & Authentication on Clouds

      Oriented to software developers and service integrators

    • XDC project meeting (CLOSED)
      Convener: Daniele Cesini
    • EOSC-synergy Collaboration Board

      First meeting of the Collaboration Board