Fermilab Computing Division

CS Document 3939-v1

Guaranteeing data availability and consistency: data transfer operation for CMS

Submitted by: Oliver Gutsche
Updated by: Oliver Gutsche
Document Created: 23 May 2010, 09:59
Contents Revised: 23 May 2010, 09:59
Metadata Revised: 23 May 2010, 09:59
Viewable by: Public document

The multi-tiered computing infrastructure of the CMS experiment at the LHC relies on the reliable and fast transfer of data between the different CMS computing sites. Data has to be transferred from the Tier-0 to the Tier-1 sites for archival in a timely manner to avoid overflowing the disk buffers at CERN. Data also has to be transferred in bursts to all Tier-2 sites for analysis, as well as synchronized between the different Tier-1 sites. The data transfer system is the key ingredient that enables the optimal usage of all distributed resources. Operating the transfer system consists of monitoring transfers and debugging transfer issues. In this talk, we present the operational procedures developed to guarantee timely delivery of data to all corners of the CMS computing infrastructure. A further task of transfer operations is to guarantee the consistency of the data at all sites, both on disk and on tape. Procedures to verify this consistency and to debug and repair problems will be discussed. The 2010 data-taking period will be summarized from the point of view of transfer operations, and lessons will be drawn for future data-taking periods.
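The abstract mentions verifying the consistency of data at each site against what is expected. As a minimal sketch of the idea (hypothetical file lists and function name, not the actual CMS/PhEDEx tooling), a consistency check amounts to comparing the set of files the central catalog expects at a site with the set actually found in its storage:

```python
def check_consistency(catalog_files, storage_files):
    """Compare catalog expectations against a site's storage listing.

    Returns (missing, orphans):
      missing - files the catalog expects but storage lacks (must be
                re-transferred or invalidated in the catalog)
      orphans - files present in storage but unknown to the catalog
                (candidates for cleanup)
    """
    catalog = set(catalog_files)
    storage = set(storage_files)
    missing = sorted(catalog - storage)
    orphans = sorted(storage - catalog)
    return missing, orphans


if __name__ == "__main__":
    # Hypothetical example paths for illustration only.
    catalog = ["/store/data/run001/f1.root", "/store/data/run001/f2.root"]
    storage = ["/store/data/run001/f2.root", "/store/tmp/stale.root"]
    missing, orphans = check_consistency(catalog, storage)
    print("missing:", missing)
    print("orphans:", orphans)
```

In practice the same comparison would be run separately against disk and tape listings, with the repair action (re-transfer, invalidation, or deletion) chosen per category.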
Files in Document: CMS DataOperations
Associated with Events: CHEP 2010, held from 18 Oct 2010 to 22 Oct 2010 in Taipei, Taiwan
