How Does The ATLAS experiment manage Petabytes of data?
|Full Title:||How Does The ATLAS experiment manage Petabytes of data?|
|Date & Time:||22 May 2017 at 14:30|
|Event Location:||PPD/ Hornets Nest-WH8X- Wilson Hall 8th fl Crossover|
|Event Topic(s):||Computing Techniques Seminar|
Dr. Vincent Garonne, Oslo University

Abstract:
ATLAS is one of two general-purpose detectors at the Large Hadron Collider (LHC). The interactions in the ATLAS detector create an enormous flow of data. Many more derived data products and complementary simulation data are also produced by the collaboration, which represents more than 3,000 scientists from 174 institutes in 38 countries. In total, ATLAS currently stores 250 PB in the Worldwide LHC Computing Grid. In this talk, we will discuss the challenges of managing such a huge amount of data and share the experience acquired in designing, running and maintaining such a large system. We will also present the challenges ahead and future developments.

Bio:
Vincent Garonne received his Ph.D. in distributed computing and scheduling from the University of Marseille, France, in 2005. Vincent's research focused on the study and modelling of different scheduling approaches for large-scale distributed systems. After his Ph.D. studies, Vincent worked as a staff member and project leader at CERN, developing Grid middleware and data processing systems. Vincent Garonne is now the ATLAS Distributed Data Management (DDM/Rucio) project leader. The DDM/Rucio system is in charge of managing ATLAS physics data across widely distributed data centers. He is also involved in the ARC (Advanced Resource Connector) project for scientists in Scandinavia and other areas working on LHC experiments (dCache).