The ATLAS experiment is a large detector for the study of high-energy proton-proton collisions at the Large Hadron Collider (LHC), presently under construction at the European Organization for Nuclear Research (CERN) and scheduled to start operation in 2007. In the ATLAS Computing Model, after reduction of the data by the online trigger processor farms, the volume of data recorded for offline reconstruction and analysis will be of the order of 1 PB (10^15 bytes) per year.
The present proposal aims to make a major contribution to the understanding, development, validation and deployment of the ATLAS Computing Model, which will be based on GRID technology. Part of the project will be to participate in the development of the software infrastructure needed for the production chain and for physics analysis. The work will conclude with a demonstration of the full ATLAS data analysis chain on a large-scale GRID production infrastructure.
The study and validation of the Computing Model started two years ago and will continue for a few years in the context of the so-called Data Challenges (DC). DC1 was conducted during 2002-2003; its main achievements were to put the production infrastructure in place through a real worldwide collaborative effort and to gain substantial experience in exercising an ATLAS-wide production model. The analysis part of the general Computing Model was not exercised in DC1.
The focus of DC2-3, planned for 2004-05, is on testing the Computing Model itself and, most importantly, its Tier-0 part, which aims at reconstructing events immediately after they emerge from the data-taking chain. It also includes an analysis phase intended to test the distributed-analysis parts of the Computing Model; the schedule of this phase is still under discussion, since an LHC-wide collaborative effort is being organized in the context of the LHC Computing Grid (LCG) project.