The ALICE Offline Bible
Version 0.00 (Rev. 23)

Table of contents

1 Table of contents
2 Introduction
2.1 About this document
2.2 Acknowledgements
2.3 History of modifications
3 The AliRoot primer
3.1 About this primer
3.2 AliRoot framework
3.3 Installation and development tools
3.3.1 Platforms and compilers
3.3.2 Essential SVN information
3.3.3 Main SVN commands
3.3.4 Environment variables
3.4 Software packages
3.4.1 AliEn
3.4.2 ROOT
3.4.3 GEANT 3
3.4.4 GEANT 4
3.4.5 FLUKA
3.4.6 Virtual Monte Carlo
3.4.7 AliRoot
3.4.8 Running Benchmarks with Fluka and Geant4
3.4.9 Debugging
3.4.10 Profiling
3.4.11 Detection of run time errors
3.4.12 Useful information LSF and CASTOR
3.5 Simulation
3.5.1 Introduction
3.5.2 Simulation framework
3.5.3 Configuration: example of Config.C
3.5.4 Event generation
3.5.5 Particle transport
3.6 Reconstruction
3.6.1 Reconstruction Framework
3.6.2 Event summary data
3.7 Analysis
3.7.1 Introduction
3.7.2 Infrastructure tools for distributed analysis
3.7.3 Analysis tools
3.8 Data input, output and exchange subsystem of AliRoot
3.8.1 The "White Board"
3.8.2 Loaders
3.9 Calibration and alignment
3.9.1 Calibration framework
3.10 The Event Tag System
3.10.1 The Analysis Scheme
3.10.2 The Event Tag System
3.10.3 The Creation of the Tag-Files
4 Run and File Metadata for the ALICE File Catalogue
4.1 Introduction
4.2 Path name specification
4.3 File name specification
4.4 Files linked to from the ALICE File Catalogue?
4.5 Metadata
4.5.1 Run metadata
4.5.2 File metadata
4.6 Population of the database
4.7 Data safety and backup procedures
5 AliEn reference
5.1 What's this section about?
5.2 The client interface API
5.3 Installation of the client interface library package – gapi
5.3.1 Installation via the AliEn installer
5.3.2 Recompilation with your locally installed compiler
5.3.3 Source Installation using AliEnBits
5.3.4 The directory structure of the client interface
5.4 Using the Client Interface - Configuration
5.5 Using the Client Interface - Authentication
5.5.1 Token Location
5.5.2 File-Tokens
5.6 Session Token Creation
5.6.1 Token Creation using a GRID Proxy certificate – alien-token-init
5.6.2 Token Creation using a password – alien-token-init
5.6.3 Token Creation via AliEn Job Token – alien-token-init
5.6.4 Manual setup of redundant API service endpoints
5.7 Checking an API session token – alien-token-info
5.8 Destroying an API session token – alien-token-destroy
5.8.1 Session Environment Files
5.9 The AliEn Shell – aliensh
5.9.1 Shell Invocation
5.9.2 Shell Prompt
5.9.3 Shell History
5.9.4 Shell Environment Variables
5.9.5 Single Command Execution from a user shell
5.9.6 Script File Execution from a user shell
5.9.7 Script execution inside aliensh "run"
5.9.8 Basic aliensh Commands
5.10 The ROOT AliEn Interface
5.10.1 Installation of ROOT with AliEn support
5.10.2 ROOT Startup with AliEn support - a quick test
5.10.3 The ROOT TGrid/TAlien module
5.11 Submitting multiple jobs
5.12 Appendix: JDL Syntax
5.12.1 JDL Tags
5.13 Appendix: Job Status
5.13.1 Status Flow Diagram
5.13.2 Non-Error Status Explanation
5.13.3 Error Status Explanation
6 Distributed analysis
6.1 Abstract
6.2 Introduction
6.3 Flow of the analysis procedure
6.4 Analysis framework
6.5 Interactive analysis with local ESDs
6.5.1 Object based cut strategy
6.5.2 String based cut strategy
6.6 Interactive analysis with GRID ESDs
6.7 Batch analysis
6.7.1 Overview of the framework
6.7.2 Using the Event Tag System
6.7.3 Files needed
6.7.4 JDL syntax
6.7.5 Job submission - Job status
6.7.6 Merging the output
6.8 Run-LHC-Detector and event level cut member functions
6.8.1 Run level member functions
6.8.2 LHC level member functions
6.8.3 Detector level member functions
6.8.4 Event level member functions
6.9 String based object and event level tags
6.9.1 Variables for run cuts
6.9.2 Variables for LHC cuts
6.9.3 Variables for detector cuts
6.9.4 Variables for event cuts
6.10 Summary
7 Appendix
7.1 Kalman filter
7.2 Bayesian approach for combined particle identification
7.2.1 Bayesian PID with a single detector
7.2.2 PID combined over several detectors
7.2.3 Stability with respect to variations of the a priori probabilities
7.2.4 Features of the Bayesian PID
7.3 Vertex estimation using tracks
7.4 Alignment framework
7.4.1 Basic objects and alignment constants
7.4.2 Use of ROOT geometry functionality
7.4.3 Application of the alignment objects to the geometry
7.4.4 Access to the Conditions Data Base
7.4.5 Summary
8 Glossary
9 References

Introduction

Purpose of this document.

About this document

History of the document, origin of the different parts and authors.

Acknowledgements

All those who helped.

History of modifications

#  Who           When     What
1  F. Carminati  30/1/07  Initial merge of documents
2  F. Carminati  19/3/07  Inserted MetaData note
3  G. Bruckner   01/7/07  Full check
4  A. Padee      03/3/08  Full check
5  E. Sicking    27/2/09  Geant4/Fluka
6  P. Hristov    1/11/09  Converted back to Word
7  P. Hristov    2/3/10   Added to SVN

The AliRoot primer

About this primer

The aim of this primer is to give some basic information about the ALICE offline framework (AliRoot) from the users' perspective. We explain in detail the installation procedure, and give examples of some typical use cases: detector description, event generation, particle transport, generation of "summable digits", event merging, reconstruction, particle identification, and generation of event summary data.

The primer also includes some examples of analysis, and a short description of the existing analysis classes in AliRoot. An updated version of the document can be downloaded from:

For the reader interested in the AliRoot architecture and in the performance studies done so far, a good starting point is Chapter 4 of the ALICE Physics Performance Report [ref]. Another important document is the ALICE Computing Technical Design Report [ref]. Some information contained there has been included in the present document, but most of the details have been omitted.

AliRoot uses the ROOT [ref] system as a foundation on which the framework for simulation, reconstruction and analysis is built. The Geant3 [ref] or FLUKA [ref] packages perform the transport of particles through the detector and simulate the energy deposition from which the detector response can be simulated.
Support for the Geant4 [ref] transport package is coming soon.

Except for large existing libraries, such as Pythia6 [ref] and HIJING [ref], and some remaining legacy code, this framework is based on the Object Oriented programming paradigm, and is written in C++.

The following packages are needed to install the fully operational software distribution:

ROOT, available from the ROOT web site or from the ROOT SVN repository;
AliRoot, from the ALICE offline SVN repository;
the transport packages:
  GEANT 3, available from the ROOT SVN repository;
  the FLUKA library, which can be obtained after registration;
  the GEANT 4 distribution.

Access to the GRID resources and data is provided by the AliEn [ref] system.

The installation details are explained in Section 3.3.

AliRoot framework

In HEP, a framework is a set of software tools that enables data processing. For example, the old CERN Program Library was a toolkit to build a framework. PAW was the first example of integration of tools into a coherent ensemble, specifically dedicated to data analysis. The role of the framework is shown in Figure 1.

Figure 1: Data processing framework

The primary interactions are simulated via event generators, and the resulting kinematic tree is then used in the transport package. An event generator produces a set of "particles" with their momenta. The set of particles, where one maintains the production history (in the form of mother-daughter relationships and production vertices), forms the kinematic tree. More details can be found in the ROOT documentation of the class TParticle. The transport package transports the particles through the set of detectors, and produces hits, which in ALICE terminology means energy deposition at a given point.
The hits also contain information ("track labels") about the particles that have generated them. In the case of the calorimeters (PHOS and EMCAL) the hit is the energy deposition in the whole active volume of a detecting element. In some detectors the energy of the hit is used only for comparison with a given threshold, for example in TOF and the ITS pixel layers.

At the next step, the detector response is taken into account, and the hits are transformed into digits. As mentioned previously, the hits are closely related to the tracks that generated them. The transition from hits/tracks to digits/detectors is marked in the picture as "disintegrated response": the tracks are "disintegrated" and only the labels carry the Monte Carlo information.

There are two types of digits: "summable digits", where one uses low thresholds and the result is additive, and "digits", where the real thresholds are used, and the result is similar to what one would get in the real data taking. In some sense the "summable digits" are precursors of the "digits". The noise simulation is activated when "digits" are produced.
There are two differences between the "digits" and the "raw" data format produced by the detector: firstly, the information about the Monte Carlo particle generating the digit is kept as a data member of the class AliDigit, and secondly, the raw data are stored in binary format as "payload" in a ROOT structure, while the digits are stored in ROOT classes. Two conversion chains are provided in AliRoot:

hits → summable digits → digits
hits → digits

The summable digits are used for the so-called "event merging", where a signal event is embedded in a signal-free underlying event. This technique is widely used in heavy-ion physics and allows reusing the underlying events with a substantial economy of computing resources. Optionally, it is possible to perform the conversion

digits → raw data

which is used to estimate the expected data size, to evaluate the high-level trigger algorithms and to carry out the so-called computing data challenges.
The reconstruction and the HLT algorithms can work both with digits and with raw data. There is also the possibility to convert the raw data between the following formats: the format coming from the front-end electronics (FEE) through the detector data link (DDL), the format used in the data acquisition system (DAQ), and the "rootified" format. More details are given in Section 3.5.

After the creation of digits, the reconstruction and analysis chains can be activated to evaluate the software and the detector performance, and to study some particular signatures. The reconstruction takes as input digits or raw data, real or simulated.

The user can intervene in the cycle provided by the framework and replace any part of it with his own code, or implement his own analysis of the data. I/O and user interfaces are part of the framework, as are data visualization and analysis tools and all procedures that are considered of general interest to be introduced into the framework. The scope of the framework evolves with time, as do the needs of the physics community.

The basic principles that have guided the design of the AliRoot framework are reusability and modularity. There are almost as many definitions of these concepts as there are programmers. However, for our purpose, we adopt an operative heuristic definition that expresses our objective: to minimize the amount of unused or rewritten code and to maximize the participation of the physicists in the development of the code.

Modularity allows replacement of parts of our system with minimal or no impact on the rest. We do not expect to replace every part of our system. Therefore we focus on modularity directed at those elements that we expect to change. For example, we require the ability to change the event generator or the transport Monte Carlo without affecting the user code.
There are elements that we do not plan to subject to major modifications, but rather to evolve in collaboration with their authors, such as the ROOT I/O subsystem or the ROOT User Interface (UI). Whenever an element is chosen to become a modular one, we define an abstract interface to it. Code contributions from different detectors are independent, so that different detector groups can work concurrently on the system while minimizing the interference. We understand and accept the risk that, at some point, the need may arise to modularize a component that was not initially designed to be modular. For these cases, we have elaborated a development strategy that can handle design changes in production code.

Reusability is the protection of the investment made by the physicist programmers of ALICE. The code embodies a large amount of scientific knowledge and experience and thus is a precious resource. We preserve this investment by designing a modular system in the sense above, and by making sure that we maintain the maximum amount of backward compatibility while evolving our system. This naturally generates requirements on the underlying framework, prompting developments such as the introduction of automatic schema evolution in ROOT.

Support of the AliRoot framework is a collaborative effort within the ALICE experiment. Questions, suggestions, topics for discussion and messages are exchanged on the mailing list. Bug reports and tasks are submitted on the Savannah page.

Installation and development tools

Platforms and compilers

The main development and production platform is Linux on Intel 32-bit processors. The official Linux [ref] distribution at CERN is Scientific Linux SLC [ref]. The code also works on RedHat [ref] versions 7.3, 8.0 and 9.0, on Fedora Core [ref] 1 to 11, and on many other Linux distributions. The main compiler on Linux is gcc [ref]: the recommended versions are gcc 3.4.6 to 4.4.2.
Older releases (2.91.66, 2.95.2, 2.96) have problems in the FORTRAN optimization, which has to be switched off for all FORTRAN packages. AliRoot can be used with gcc 4.0.X, where the FORTRAN compiler g77 is replaced by g95, but g95 is considered obsolete. The latest release series of gcc (4.1) works with gfortran as well. As an option you can use the Intel icc [ref] compiler, which is also supported. You can download it from the Intel web site and use it free of charge for non-commercial projects. Intel also provides free of charge the VTune [ref] profiling tool, which is one of the best available so far.

AliRoot is supported on Intel 64-bit processors [ref] running Linux. Both the gcc and Intel icc compilers can be used.

On 64-bit AMD [ref] processors, such as the Opteron, AliRoot runs successfully with the gcc compiler.

The software is also regularly compiled and run on other Unix platforms. On Sun (SunOS 5.8) we recommend the CC compiler Sun WorkShop 6 update 1 C++ 5.2. The WorkShop integrates nice debugging and profiling facilities that are very useful for code development.

On the Compaq alpha server (Digital Unix V4.0) the default compiler is cxx (Compaq C++ V6.2-024 for Digital UNIX V4.0F). Alpha also provides its own profiling tool, pixie, which works well with shared libraries. AliRoot also works on alpha servers running Linux, where the compiler is gcc. The Compaq alpha server is not supported anymore.

Recently AliRoot was ported to MacOS (Darwin). This OS is very sensitive to circular dependences in the shared libraries, which makes it very useful as a test platform.

Essential SVN information

SVN stands for Subversion: it is a version control system that enables a group of people to work together on a set of files (for instance program sources). It also records the history of the files, which allows backtracking and file versioning. Subversion succeeded the Concurrent Versions System (CVS), and is therefore mostly compatible with CVS.

The official SVN Web page is .
SVN has a host of features, among which the most important are:

SVN facilitates parallel and concurrent code development;
it provides easy support and simple access.

SVN has a rich set of commands; the most important ones are described below. There exist several tools for visualization, logging and control that work with SVN. More information is available in the SVN documentation and manual [ref].

Usually the development process with SVN has the following features:

all developers work on their own copy of the project (in one of their directories);
they often have to synchronize with a global repository, both to update with modifications from other people and to commit their own changes;
in case of conflicts, it is the developer's responsibility to resolve the situation, because the SVN tool can only perform a purely mechanical merge.

Instructions for using the AliRoot Subversion repository can be found at:

Main SVN commands

svn checkout: check out a working copy from a repository.
% svn co AliRoot

svn update: update your working copy. This command should be called from inside the working directory. The first character in the line for each updated item stands for the action taken for this item. The characters have the following meaning:
A - added
D - deleted
U - updated
C - conflict
G - merged
% svn update

svn diff: display the differences between two paths.
% svn diff -r 20943 Makefile

svn add: add files, directories, or symbolic links.
% svn add AliTPCseed.*

svn delete: delete an item from a working copy or repository.
% svn delete --force CASTOR

svn commit: check in the local modifications to the repository, send the changes of the working copy, and increment the version numbers of the files. In the example below all the changes made in the different files of the module STEER will be committed to the repository. The -m option is followed by the log message.
In case you don't provide it, an editor window will prompt you to enter your description. No commit is possible without the log message that explains what was done.
% svn ci newname.cxx

svn status: print the status of working copy files and directories. With --show-updates, add the working revision and server out-of-date information. With --verbose, print full revision information on every item.
% svn status Makefile

Environment variables

Before the installation of AliRoot, the user has to set certain environment variables. In the following examples, we assume that the user is working on Linux and that the default shell is bash. It is sufficient to add to ~/.bash_profile the lines shown below:

# ROOT
export ROOTSYS=<ROOT installation directory>
export PATH=$PATH:$ROOTSYS/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ROOTSYS/lib

# AliRoot
export ALICE=<AliRoot installation directory>
export ALICE_ROOT=$ALICE/AliRoot
export ALICE_TARGET=`root-config --arch`
export PATH=$PATH:$ALICE_ROOT/bin/tgt_${ALICE_TARGET}
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ALICE_ROOT/lib/tgt_${ALICE_TARGET}

# Geant3
export PLATFORM=`root-config --arch`
# Optional, defined otherwise in the Geant3 Makefile
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ALICE/geant3/lib/tgt_${ALICE_TARGET}

# Optional: if you want to use Fluka
# FLUKA
export FLUPRO=$ALICE/fluka  # $FLUPRO is used in TFluka
export PATH=$PATH:$FLUPRO/flutil
# Fluka VMC
export FLUVMC=$ALICE/fluka_vmc
export FLUKALIB=$ALICE/fluka_vmc/lib/tgt_$ALICE_TARGET
export LD_LIBRARY_PATH=$FLUKALIB:$LD_LIBRARY_PATH

# Optional: if you want to use Geant4
# Geant4:
export CLHEP_BASE_DIR=$ALICE/CLHEP
export G4INSTALL=$ALICE/geant4
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$G4INSTALL/include
export G4LIB_BUILD_SHARED=1
export G4LIB_USE_G3TOG4=1
export CLHEP_INCLUDE_DIR=$ALICE/CLHEP/include
export CLHEP_LIB_DIR=$ALICE/CLHEP/lib
export CLHEP_LIB=$ALICE/CLHEP
. $ALICE/geant4/.config/bin/Linux-g++/
# the script will be generated during the Geant4 compilation

# Geant4 VMC
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CLHEP_LIB_DIR
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ALICE/geant4_vmc/lib/tgt_${ALICE_TARGET}

The meaning of the environment variables is the following:

ROOTSYS: directory of the ROOT package;
ALICE: top directory for all software packages used in ALICE;
ALICE_ROOT: directory where the AliRoot package is located, usually a subdirectory of ALICE;
ALICE_TARGET: specific platform name. Up to release v4-01-Release this variable was set to the result of the "uname" command. Starting from AliRoot v4-02-05 the ROOT naming schema was adopted, and the user has to use the "root-config --arch" command in order to obtain the proper value;
PLATFORM: same as ALICE_TARGET, for the GEANT 3 package. Until GEANT 3 v1-0, the user had to use `uname` to specify the platform. From version v1-0 on, the ROOT platform is used instead ("root-config --arch"). This environment variable is set by default in the Geant 3 Makefile.

Software packages

AliEn

The installation of AliEn is the first one to be done if you plan to access the GRID or need GRID-enabled ROOT. You can download the AliEn installer and use it in the following way:

# Get the installer:
% wget
% chmod +x alien-installer
% ./alien-installer

The alien-installer runs a dialog that prompts for the default selection and options. The default installation place for AliEn is /opt/alien, and the typical packages one has to install are "client" and "gshell".

ROOT

All ALICE offline software is based on ROOT [ref].
The ROOT framework offers a number of important elements that are exploited in AliRoot:

a complete data analysis framework including all the PAW features;
an advanced Graphic User Interface (GUI) toolkit;
a large set of utility functions, including several commonly used mathematical functions, random number generators, multi-parametric fit and minimization procedures;
a complete set of object containers;
integrated I/O with class schema evolution;
C++ as a scripting language;
documentation tools.

There is a useful ROOT user's guide that contains important and detailed information. For those who are not familiar with ROOT, a good starting point is the ROOT Web page. There, the experienced user may easily find the latest version of the class descriptions and search for useful information.

The recommended way to install ROOT is from the SVN sources, as shown below:

Get a specific version (>= 2.25/00), e.g. version 2.25/03:
prompt% svn co root

Alternatively, check out the head (development version) of the sources:
prompt% svn co root

In both cases you should have a subdirectory called "root" in the directory where you ran the above commands.

The appropriate combinations of ROOT, Geant 3 and AliRoot versions can be found at:

The code is stored in the directory "root". You have to go there, set the ROOTSYS environment variable (if this was not done in advance), and configure ROOT. ROOTSYS must contain the full path to the ROOT directory.
ROOT configuration:

#!/bin/sh
cd root
export ROOTSYS=`pwd`

ALIEN_ROOT=/opt/alien

./configure \
  --with-f77=gfortran \
  --with-pythia6-uscore=SINGLE \
  --enable-cern --enable-rfio \
  --enable-mathmore --enable-mathcore --enable-roofit \
  --enable-asimage --enable-minuit2 \
  --enable-alien --with-alien-incdir=${ALIEN_ROOT}/api/include \
  --with-alien-libdir=${ALIEN_ROOT}/api/lib

If you want to use Fluka and Geant4, please add the following options before compiling ROOT:

# for Fluka
  --with-f77=g77 \
# for Geant4: not needed anymore
  --enable-g4root \
  --with-g4-incdir=$G4INSTALL/include \
  --with-g4-libdir=$G4INSTALL/lib/Linux-g++ \
  --with-clhep-incdir=$CLHEP_BASE_DIR/include

If you want to use Fluka, please use a 32-bit system. If this is not possible, compile everything in 32-bit mode, starting with ROOT:
./configure linux

Now you can compile and test ROOT.

Compiling and testing ROOT:

#!/bin/sh
make
make map
cd test
make
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:.
export PATH=$PATH:.
./stress
./stressFit
./stressGeometry
./stressGraphics
./stressHepix
./stressLinear
./stressShapes
./stressSpectrum
./stressVector

At this point the user should have a working ROOT version on Linux (32-bit Pentium processor with the gcc compiler). The list of supported platforms can be obtained with the "./configure --help" command.

GEANT 3

The installation of GEANT 3 is needed because it currently is the default particle transport package. A GEANT 3 description is available online.

You can download the GEANT 3 distribution from the ROOT SVN repository and compile it in the following way:

make GEANT 3:
cd $ALICE
svn co geant3
cd $ALICE/geant3
export PLATFORM=`root-config --arch`
make

Please note that GEANT 3 is downloaded into the $ALICE directory. Another important feature is the PLATFORM environment variable.
If it is not set, the Geant 3 Makefile will set it to the result of `root-config --arch`.

GEANT 4

To use GEANT 4 [ref], some additional software has to be installed. GEANT 4 needs the CLHEP [ref] package; the user can get the tar file (or "tarball") from the CLHEP web site. Choose the latest version of the "source". Then the installation can be done in the following way:

make CLHEP:
tar zxvf clhep-
cd
./configure --prefix=$ALICE/CLHEP  # select the place to install CLHEP
make
make check
make install

Another possibility is to use the CLHEP CVS repository:

make CLHEP from CVS:
cvs -d login
# Empty password
cvs -d \
  co -r CLHEP_2_0_3_1 CLHEP
cd CLHEP
./bootstrap
./configure --prefix=$ALICE/CLHEP  # select the place to install CLHEP
make
make check
make install

Now the following lines should be added to ~/.bash_profile:

export CLHEP_BASE_DIR=$ALICE/CLHEP
export CLHEP_INCLUDE_DIR=$ALICE/CLHEP/include
export CLHEP_LIB_DIR=$ALICE/CLHEP/lib

The next step is to install GEANT 4. The GEANT 4 distribution is available from the GEANT 4 web site.
Typically the following files will be downloaded (the current versions may differ from the ones below):\rgeant4.9.2.p02.tar.gz: source tarball\rG4NDL.3.13.tar.gz: G4NDL version 3.13 neutron data files with thermal cross sections\rG4EMLOW.6.2.tar.gz: data files for low energy electromagnetic processes - version 6.2\rPhotonEvaporation.2.0.tar.gz: data files for photon evaporation - version 2.0\rRadiativeDecay.3.2.tar.gz: data files for radioactive decay hadronic processes - version 3.2\rThen the following steps have to be executed:\rmake GEANT4\rtar zxvf geant4.9.2.p02.tar.gz\rcd geant4.9.2.p02\rmkdir data\rcd data\rtar zxvf ../../G4NDL.3.13.tar.gz\rtar zxvf ../../G4EMLOW.6.2.tar.gz\rtar zxvf ../../PhotonEvaporation.2.0.tar.gz\rtar zxvf ../../RadiativeDecay.3.2.tar.gz\rcd ..\r\r# Configuration and compilation\r\r./Configure -build\r\rAs answer choose the default value, except of the following:\r\rGeant4 library path: $ALICE/geant4\rCopy all Geant4 headers in one directory? Y\rCLHEP location: $ALICE/CLHEP\rbuild 'shared' (.so) libraries? Y\rG4VIS_BUILD_OPENGLX_DRIVER and G4VIS_USE_OPENGLX Y\rG4LIB_BUILD_G3TOG4 Y\rNow a long compilation...\r\rFor installation in the selected place ($ALICE/geant4):\r./Configure -install\rcd $G4INSTALL/source\rmake\rmake includes \r# this copies headers \r# from $ROOTSYS/montecarlo/g4root/inc/ \r# and all files from include directories in $G4INSTALL/\r# into $G4INSTALL/include\r\rEnvironment variables - The execution of the script can be done from the ~/.bash_profile to have the GEANT4 environment variables initialized automatically. Please note the "dot" in the beginning:\r# The <platform> has to be replaced by the actual value\r. $ALICE/geant4/src/geant4/.config/bin/<platform>/\r\rFLUKA\rSo far, Fluka is available in a 32bit version. In order to use Fluka and fluka_vmc, the user has to install ROOT, and AliRoot also in 32bit. 
A 64bit version will be available soon (2010?).\rThe installation of FLUKA [\13 PAGEREF _RefE5 \h \ 1\14178\15] consists of the following steps:\rregister as a FLUKA user at if you have not yet done so. You will receive your “fuid” number and will set your password;\rdownload the latest FLUKA version from \13 HYPERLINK ""\ 1\14\15. Use your “fuid” registration and password when prompted. You will obtain a tarball containing the FLUKA libraries, for example fluka2008.3b-linuxAA.tar.gz.\rinstall the libraries;\rinstall FLUKA\r# Make fluka subdirectory in $ALICE\rcd $ALICE\rmkdir fluka\r\r# Unpack the FLUKA libraries in the $ALICE/fluka directory.\r# Please set correctly the path to the FLUKA tarball.\rcd fluka\rtar zxvf <path_to_fluka_tarball>/fluka-2008.3b-1.i386.rpm\r\r# Set the environment variables\rexport FLUPRO=$ALICE/fluka\rmake\r\rrun AliRoot using FLUKA;\r% cd $ALICE_ROOT/TFluka/scripts\r% ./\rThis script creates the directory ‘tmp’ as well as all the necessary links for data and configuration files, and starts AliRoot. For the next run, it is not necessary to run the script again. The ‘tmp’ directory can be kept or renamed. The user should run AliRoot from within this directory.\rFrom the AliRoot prompt, start the simulation;\rroot [0] AliSimulation sim;\rroot [1] sim.Run();\rYou will get the results of the simulation in the ‘tmp’ directory.\rreconstruct the simulated event;\r% cd tmp\r% aliroot\rand from the AliRoot prompt\rroot [0] AliReconstruction rec;\rroot [1] rec.Run();\rreport any problem you encounter to the offline list \13 HYPERLINK ""\ 1\\15.\r\rVirtual Monte Carlo\rSo far, the AliRoot framework has used the GEANT3 Monte Carlo for the particle transport, which is not maintained anymore. GEANT3 is intended to be replaced in the future by another Monte Carlo. In order to protect the user code from further changes of the transport Monte Carlo, a Virtual Monte Carlo (VMC) was developed by the ALICE Offline project. 
The VMC provides an abstract interface to the transport codes. It is implemented for GEANT3, Fluka and GEANT4. The Virtual Monte Carlo (VMC) allows one to run different transport Monte Carlos without changing the user code, and therefore without changing the input and output format as well as the geometry and detector response definition. Further explanations will be given in Section \13 REF _Ref35869642 \n \h \ 1\143.5\15.\rTo use Fluka and Geant4 within AliRoot, fluka_vmc and geant4_vmc have to be installed as well.\r\r fluka_vmc\rAs mentioned above, ROOT has to be configured with this option: \r--with-f77=g77\rto make sure that fluka_vmc will be built with the Fortran compiler supported by Fluka (gcc/g77). \rSet the environment:\rexport FLUVMC=$ALICE/fluka_vmc\rexport FLUKALIB=$ALICE/fluka_vmc/lib/tgt_$ALICE_TARGET\rexport LD_LIBRARY_PATH=$FLUKALIB/\:$LD_LIBRARY_PATH\r\rDownload and install fluka_vmc:\rcd $ALICE\rsvn co fluka_vmc\rcd fluka_vmc/source\rmake\r\r geant4_vmc\rROOT has to be configured with the g4root package; for this you have to specify the following configure options: \r--enable-g4root \\r--with-g4-incdir=$G4INSTALL/include \\r--with-g4-libdir=$G4INSTALL/lib/Linux-g++ \\r--with-clhep-incdir=$CLHEP_BASE_DIR/include \ \rNow the following lines should be added to ~/.bash_profile:\rexport LD_LIBRARY_PATH=$LD_LIBRARY_PATH\:$CLHEP_LIB_DIR\rexport LD_LIBRARY_PATH=$LD_LIBRARY_PATH\:$ALICE/geant4_vmc/lib/tgt_${ALICE_TARGET}\rDownload and install geant4_vmc:\rcd $ALICE\rsvn co geant4_vmc\rcd geant4_vmc/source\rmake\r\rAliRoot\rThe AliRoot distribution is taken from the SVN repository and then compiled:\r\r% cd $ALICE \v   % svn co AliRoot \v   % cd $ALICE_ROOT \v   % make \r\rThe AliRoot code (the above example retrieves the HEAD version from SVN) is contained in the $ALICE_ROOT directory. 
The $ALICE_TARGET is defined automatically in the .bash_profile via the call to `root-config --arch`.\rRunning Benchmarks with Fluka and Geant4\rRun benchmarks in $ALICE_ROOT/test with Fluka:\rSet symbolic links to data and configuration files, which are also placed in the TFluka/tmp directory, in the current directory, e.g. ppbench:\rln -s $ALICE_ROOT/TFluka/input/coreFlukaVmc.inp coreFlukaVmc.inp\rln -s $FLUPRO/neuxsc-ind_260.bin neuxsc.bin\rln -s $FLUPRO/random.dat random.dat\rSelect the Fluka option in Config.C.\rStart the simulation as it is done with GEANT3.\r\rRun benchmarks in $ALICE_ROOT/test with GEANT4:\rCopy geometry.root from the same simulation with GEANT3.\rSelect the Geant4 option in Config.C.\rStart the simulation as in the GEANT3 case.\rDebugging\rWhile developing code or running some ALICE program, the user may be confronted with the following execution errors:\rFloating exceptions: division by zero, sqrt from negative argument, assignment of NaN, etc.\rSegmentation violations/faults: attempt to access a memory location which the operating system does not allow it to access.\rBus error: attempt to access memory that the computer cannot address.\rIn this case, the user will have to debug the program in order to determine the source of the problem and fix it. There are several debugging techniques, which are briefly listed below:\rusing printf(...), std::cout, assert(...) and AliDebug.\roften this is the only easy way to find the origin of the problem;\rassert(...) aborts the program execution if the argument is FALSE. It is a macro from cassert; it can be deactivated by compiling with -DNDEBUG.\rusing the GNU Debugger (gdb), see \13 HYPERLINK ""\ 1\14\15\rgdb needs compilation with the -g option. 
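The assert(...) mechanism mentioned above can be tried with a minimal standalone snippet (the helper below is purely illustrative, not AliRoot code). When the assertion fires in a binary compiled with -g, running it under gdb and typing where shows the failing frame:

```cpp
#include <cassert>
#include <cmath>

// Illustrative helper: the caller must not pass a negative argument.
// assert() enforces this precondition in debug builds; compiling the
// same file with -DNDEBUG removes the check completely.
double safeSqrt(double x) {
  assert(x >= 0. && "sqrt needs a non-negative argument");
  return std::sqrt(x);
}
```

Compile with `g++ -O0 -g` while debugging, and with `-DNDEBUG` for production builds, exactly as described for gdb below.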
Sometimes -O2 -g prevents exact tracing, so it is safe to compile with -O0 -g for debugging purposes;\rOne can use it directly (gdb aliroot) or attach it to a process (gdb aliroot 12345 where 12345 is the process id).\rBelow we report the main gdb commands and their descriptions:\rrun starts the execution of the program;\rControl-C stops the execution and switches to the gdb shell;\rwhere <n> prints the program stack. Sometimes the program stack is very long. The user can get the last n frames by specifying n as a parameter to where;\rprint prints the value of a variable or expression;\r(gdb) print *this\rup and down are used to navigate through the program stack;\rquit exits the gdb session;\rbreak sets a breakpoint;\r(gdb) break AliLoader.cxx:100\r(gdb) break 'AliLoader::AliLoader()'\rThe automatic completion of the class methods via tab is available if an opening quote (`) is put in front of the class name.\rcont continues the run;\rwatch sets a watchpoint (very slow execution). The example below shows how to check each change of fData;\r(gdb) watch *fData\rlist shows the source code;\rhelp shows the description of commands.\rProfiling\rProfiling is used to discover where the program spends most of the time, and to optimize the algorithms. There are several profiling tools available on different platforms:\rLinux tools:\r gprof: compilation with the -pg option, static libraries\r oprofile: uses a kernel module\r VTune: instruments shared libraries.\rSun: Sun workshop (Forte agent). It needs compilation with the profiling option (-pg)\rCompaq Alpha: pixie profiler. 
Instruments shared libraries for profiling.\rOn Linux AliRoot can be built with static libraries using the special target “profile”:\r\r% make profile\r# change LD_LIBRARY_PATH to replace lib/tgt_linux with lib/tgt_linuxPROF\r# change PATH to replace bin/tgt_linux with bin/tgt_linuxPROF\r% aliroot\rroot [0] gAlice->Run()\rroot [1] .q\r\rAt the end of an AliRoot session, a file called ‘gmon.out’ will be created. It contains the profiling information that can be investigated using gprof.\r\r% gprof `which aliroot` | tee gprof.txt\r% more gprof.txt\r\rVTune profiling tool\rVTune is available from the Intel Web site \13 HYPERLINK ""\ 1\14\15. It is free for non-commercial use on Linux. It provides the possibility of call-graph and sampling profiling. VTune uses shared libraries and is enabled via the ‘-g’ option during compilation. Here is an example of call-graph profiling:\r\r# Register an activity\r% vtl activity sim -c callgraph -app aliroot," -b -q sim.C" -moi aliroot\r% vtl run sim\r% vtl show\r% vtl view sim::r1 -gui\r\rProfiling with Valgrind\rValgrind recently added a tool that can profile binaries with shared libraries without any recompilation.\r\r# Profiling of reconstruction\r % valgrind --tool=callgrind aliroot -b -q rec.C\r % kcachegrind callgrind.out.<proc_id>\r\r\rDetection of run time errors\rThe Valgrind tool can be used for detection of run time errors on Linux. It is available from \13 HYPERLINK ""\ 1\14\15. Valgrind is equipped with the following set of tools, which are important for debugging AliRoot:\rmemcheck: memory management problems;\rcachegrind: cache profiler;\rmassif: heap profiler;\rHere is an example of Valgrind usage:\r\r% valgrind --tool=addrcheck --error-limit=no aliroot -b -q sim.C\r\rROOT memory checker\rThe ROOT memory checker does not work with the current version of ROOT. This section is for reference only.\rThe ROOT memory checker provides tests of memory leaks and other problems related to new/delete. 
It is fast and easy to use. Here is the recipe:\rlink aliroot with -lNew. The user has to add ‘--new’ before ‘--glibs’ in the ROOTCLIBS variable of the Makefile;\rRoot.MemCheck: 1 in .rootrc\rrun the program: aliroot -b -q sim.C\rrun memprobe -e aliroot\rInspect the files with .info extension that have been generated.\rUseful information LSF and CASTOR\rThe information in this section is included for completeness: users are strongly advised to rely on the GRID tools for productions and data access.\rLSF (Load Sharing Facility) is the batch system at CERN. Every user is allowed to submit jobs to the different queues. Usually the user has to copy some input files (macros, data, executables, libraries) from a local computer or from the mass-storage system to the worker node on lxbatch, to execute the program, and to store the results on the local computer or in the mass-storage system. The methods explained in the section are suitable, if the user doesn't have direct access to a shared directory, for example on AFS. The main steps and commands are described below.\rIn order to have access to the local desktop and to be able to use scp without password, the user has to create pair of SSH keys. Currently lxplus/lxbatch uses RSA1 cryptography. After a successful login to lxplus, the following has to be done:\r\r% ssh-keygen -t rsa1\r# Use empty password\r% cp .ssh/ public/authorized_keys\r% ln -s ../public/authorized_keys .ssh/authorized_keys\r\rA list of useful LSF commands is given below:\rbqueues – shows available queues and their status;\rbsub -q 8nm – submits the shell script ‘’ to the queue 8nm, where the name of the queue indicates the “normalized CPU time” (maximal job duration 8 min of normalized CPU time);\rbjobs – lists all unfinished jobs of the user;\rlsrun -m lxbXXXX xterm – returns a xterm running on the batch node lxbXXXX. 
This allows you to inspect the job output and to debug a batch job.\rEach batch job stores the output in the directory ‘LSFJOB_XXXXXX’, where ‘XXXXXX’ is the job id. Since the home directory is on AFS, the user has to redirect the verbose output, otherwise the AFS quota might be exceeded and the jobs will fail.\rThe CERN mass storage system is CASTOR2 [\ 2]. Every user has his/her own CASTOR2 space, for example /castor/\rThe commands of CASTOR2 start with the prefix “ns” or “rf”. Here is a very short list of useful commands:\rnsls /castor/ lists the CASTOR space of user phristov;\rrfdir /castor/ the same as above, but the output is in long format;\rnsmkdir test creates a new directory ‘test’ in the CASTOR space of the user;\rrfcp /castor/ copies the file from CASTOR to the local directory. If the file is on tape, this will trigger the stage-in procedure, which might take some time.\rrfcp AliESDs.root /castor/ copies the local file ‘AliESDs.root’ to CASTOR in the subdirectory ‘test’ and schedules it for migration to tape.\rThe user also has to be aware that the behavior of CASTOR depends on the environment variables RFIO_USE_CASTOR_V2 (=YES), STAGE_HOST (=castoralice) and STAGE_SVCCLASS (=default). They are set by default to the values for the group (z2 in case of ALICE).\rBelow the user can find a job example, where the simulation and reconstruction are run using the corresponding macros ‘sim.C’ and ‘rec.C’.\rAn example of such macros will be given later.\rLSF example job\r#! /bin/sh\r# Copy all the C++ macros from the local computer to the working directory \rcommand scp phristov@pcepalice69:/home/phristov/pp/*.C .\r\r# Execute the simulation macro. Redirect the output and error streams\rcommand aliroot -b -q sim.C > sim.log 2>&1\r\r# Execute the reconstruction macro. 
Redirect the output and error streams\rcommand aliroot -b -q rec.C > rec.log 2>&1\r\r# Create a new CASTOR directory for this job ($LSB_JOBID)\rcommand rfmkdir /castor/$LSB_JOBID\r\r# Copy all log files to CASTOR\rfor a in *.log; do rfcp $a /castor/$LSB_JOBID; done\r# Copy all ROOT files to CASTOR\rfor a in *.root; do rfcp $a /castor/$LSB_JOBID; done\r\rSimulation\rIntroduction\rHeavy-ion collisions produce a very large number of particles in the final state. This is a challenge for the reconstruction and analysis algorithms. Detector design and development of these algorithms require a predictive and precise simulation of the detector response. Model predictions, as discussed in the first volume of Physics Performance Report for the charged multiplicity at LHC in Pb–Pb collisions, vary from 1400 to 8000 particles in the central unit of rapidity. The experiment was designed when the highest available nucleon–nucleon center-of-mass energy heavy-ion interactions was at 20 GeV per nucleon–nucleon pair at CERN SPS, i.e. a factor of about 300 less than the energy at LHC. Recently, the RHIC collider came online. Its top energy of 200 GeV per nucleon–nucleon pair is still 30 times less than the LHC energy. The RHIC data seem to suggest that the LHC multiplicity will be on the lower side of the interval. However, the extrapolation is so large that both the hardware and software of ALICE have to be designed for the highest multiplicity. 
Moreover, as the predictions of different generators of heavy-ion collisions differ substantially at LHC energies, we have to use several of them and compare the results.\rThe simulation of the processes involved in the transport through the detector of the particles emerging from the interaction is confronted with several problems:\rExisting event generators give different answers on parameters such as expected multiplicities, $p_T$-dependence and rapidity dependence at LHC energies.\rMost of the physics signals, like hyperon production, high-pt phenomena, open charm and beauty, quarkonia etc. are not exactly reproduced by the existing event generators.\rSimulation of small cross-sections would demand prohibitively high computing resources to simulate a number of events that is commensurable with the expected number of detected events in the experiment.\rExisting generators do not provide for event topologies like momentum correlations, azimuthal flow etc.\rNevertheless, to allow efficient simulations, we have adopted a framework that allows for a number of options:\rThe simulation framework provides an interface to external generators, like HIJING [\13 PAGEREF _RefE8 \h \ 1\14178\15] and DPMJET [\ 2].\rA parameterized, signal-free, underlying event where the produced multiplicity can be specified as an input parameter is provided.\rRare signals can be generated using the interface to external generators like PYTHIA or simple parameterizations of transverse momentum and rapidity spectra defined in function libraries.\rThe framework provides a tool to assemble events from different signal generators (event cocktails).\rThe framework provides tools to combine underlying events and signal events at the primary particle level (cocktail) and at the summable digit level (merging).\r“afterburners” are used to introduce particle correlations in a controlled way. 
An afterburner is a program which changes the momenta of the particles produced by another generator, and thus modifies the multi-particle momentum distributions, as desired.\rThe implementation of this strategy is described below. The results of different Monte Carlo generators for heavy-ion collisions are described in section \13 REF _Ref35869904 \n \h \ 1\143.5.4\15.\rSimulation framework\rThe simulation framework covers the simulation of primary collisions and generation of the emerging particles, the transport of particles through the detector, the simulation of energy depositions (hits) in the detector components, their response in form of so called summable digits, the generation of digits from summable digits with the optional merging of underlying events and the creation of raw data.\rThe \13 HYPERLINK ""\ 1\14AliSimulation\15 class provides a simple user interface to the simulation framework. This section focuses on the simulation framework from the (detector) software developer point of view.\r\ 1\rFigure \13 SEQ "Figure" \*Arabic \142\15: Simulation framework.\rGeneration of Particles\rDifferent generators can be used to produce particles emerging from the collision. The class \13 HYPERLINK ""\ 1\14AliGenerator\15 is the base class defining the virtual interface to the generator programs. The generators are described in more detail in the ALICE PPR Volume 1 and in the next chapter.\rVirtual Monte Carlo\rA class derived from \13 HYPERLINK ""\ 1\14TVirtualMC\15 performs the simulation of particles traversing the detector components. The Virtual Monte Carlo also provides an interface to construct the geometry of detectors. The geometrical modeller TGeo does the task of the geometry description. The concrete implementation of the virtual Monte Carlo application \13 HYPERLINK ""\ 1\14TVirtualMCApplication\15 is \13 HYPERLINK ""\ 1\14AliMC\15. The Monte Carlos used in ALICE are GEANT 3.21, GEANT 4 and FLUKA. 
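The virtual-interface pattern described here can be illustrated with a minimal C++ sketch. The class and method names below are simplified stand-ins for illustration only, not the real TVirtualMC API:

```cpp
#include <cassert>
#include <string>

// Abstract transport interface: user code talks only to this class.
class VirtualTransport {
public:
  virtual ~VirtualTransport() {}
  virtual std::string Name() const = 0;
  virtual void ProcessEvent() = 0;  // transport all primaries of one event
};

// One concrete backend, e.g. a C++ wrapper around a FORTRAN engine.
class Geant3Like : public VirtualTransport {
public:
  std::string Name() const { return "Geant3"; }
  void ProcessEvent() { /* would call the wrapped FORTRAN routines */ }
};

// User code is written once against the base class; swapping the
// backend (GEANT3, GEANT4, FLUKA) does not change this function.
std::string runWith(VirtualTransport& mc) {
  mc.ProcessEvent();
  return mc.Name();
}
```

The design choice is exactly the one the text describes: detector code depends only on the abstract base, so the concrete engine can be exchanged at configuration time.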
More information can be found on the VMC Web page: \13 HYPERLINK ""\ 1\14\15.\rAs explained above, our strategy was to develop a virtual interface to the detector simulation code. We call the interface to the transport code ‘virtual Monte Carlo’. It is implemented via C++ virtual classes and is schematically shown in \13 REF _Ref31272578 \h \ 1\14Figure 3\15. Implementations of those abstract classes are C++ programs or wrapper classes that interface to FORTRAN programs.\r\ 1\rFigure \13 SEQ "Figure" \*Arabic \143\15. Virtual Monte Carlo\rThanks to the virtual Monte Carlo, we have converted all FORTRAN user code developed for GEANT 3 into C++, including the geometry definition and the user scoring routines, StepManager. These have been integrated in the detector classes of the AliRoot framework. The output of the simulation is saved directly with ROOT I/O, simplifying the development of the digitization and reconstruction code in C++.\rModules and Detectors\rA class derived from \13 HYPERLINK ""\ 1\14AliModule\15 describes each module of the ALICE detector. Classes for active modules (i.e. detectors) are not derived directly from \13 HYPERLINK ""\ 1\14AliModule\15 but from its subclass \13 HYPERLINK ""\ 1\14AliDetector\15. These base classes define the interface to the simulation framework via a set of virtual methods.\rConfiguration File (Config.C)\rThe configuration file is a C++ macro that is processed before the simulation starts. It creates and configures the Monte Carlo object, the generator object, the magnetic field map and the detector modules. 
A detailed description is given below.\rDetector Geometry\rThe virtual Monte Carlo application creates and initializes the geometry of the detector modules by calling the virtual functions CreateMaterials, CreateGeometry, Init and BuildGeometry.\rVertexes and Particles\rIn case the simulated event is intended to be merged with an underlying event, the primary vertex is taken from the file containing the underlying event by using the vertex generator \13 HYPERLINK ""\ 1\14AliVertexGenFile\15. Otherwise, the primary vertex is generated according to the generator settings. Then the particles emerging from the collision are generated and put on the stack (an instance of \13 HYPERLINK ""\ 1\14AliStack\15). The Monte Carlo object performs the transport of particles through the detector. The external decayer \13 HYPERLINK ""\ 1\14AliDecayerPythia\15 usually handles the decay of particles.\rHits and Track References\rThe Monte Carlo simulates the transport of a particle step by step. After each step the virtual method StepManager of the module in which the particle is currently located is called. In this StepManager method, calling AddHit creates the hits in the detector. Optionally, track references (location and momentum of simulated particles at selected places) can also be created by calling AddTrackReference. AddHit has to be implemented by each detector whereas AddTrackReference is already implemented in \13 HYPERLINK ""\ 1\14AliModule\15. The detector class manages the container and the branch for the hits – and for the (summable) digits – via a set of so-called loaders. 
The relevant data members and methods are fHits, fDigits, ResetHits, ResetSDigits, ResetDigits, MakeBranch and SetTreeAddress.\rFor each detector methods like PreTrack, PostTrack, FinishPrimary, FinishEvent and FinishRun are called during the simulation when the conditions indicated by the method names are fulfilled.\rSummable Digits\rCalling the virtual method Hits2SDigits of a detector creates summable digits. This method loops over all events, creates the summable digits from hits and stores them in the sdigits file(s).\rDigitization and Merging\rDedicated classes derived from \13 HYPERLINK ""\ 1\14AliDigitizer\15 are used for the conversion of summable digits into digits. Since \13 HYPERLINK ""\ 1\14AliDigitizer\15 is a \13 HYPERLINK ""\ 1\14TTask\15, this conversion is done for the current event by the Exec method. Inside this method the summable digits of all input streams have to be added, combined with noise, converted to digital values taking into account possible thresholds, and stored in the digits container.\rAn object of type \13 HYPERLINK ""\ 1\14AliRunDigitizer\15 manages the input streams (more than one in case of merging) as well as the output stream. The methods GetNinputs, GetInputFolderName and GetOutputFolderName return the relevant information. The run digitizer is accessible inside the digitizer via the protected data member fManager. If the flag fRegionOfInterest is set, only detector parts where summable digits from the signal event are present should be digitized. When Monte Carlo labels are assigned to digits, the stream-dependent offset given by the method GetMask is added to the label of the summable digit.\rThe detector specific digitizer object is created in the virtual method CreateDigitizer of the concrete detector class. The run digitizer object is used to construct the detector digitizer. 
The Init method of each digitizer is called before the loop over the events is started.\rA direct conversion from hits directly to digits can be implemented in the method Hits2Digits of a detector. The loop over the events takes place inside the method. Of course merging is not supported in this case.\rAn example of a simulation script that can be used for simulation of proton-proton collisions is provided below:\rSimulation run\rvoid sim(Int_t nev=100) {\r AliSimulation simulator;\r// Measure the total time spent in the simulation\r TStopwatch timer;\r timer.Start();\r// List of detectors, where both summable digits and digits are provided\r simulator.SetMakeSDigits("TRD TOF PHOS EMCAL HMPID MUON ZDC PMD FMD T0 VZERO");\r// Direct conversion of hits to digits for faster processing (ITS TPC)\r simulator.SetMakeDigitsFromHits("ITS TPC");\r simulator.Run(nev);\r timer.Stop();\r timer.Print();\r}\r\rThe following example shows how one can do event merging:\rEvent merging\rvoid sim(Int_t nev=6) {\r AliSimulation simulator;\r// The underlying events are stored in a separate directory.\r// Three signal events will be merged in turn with each\r// underlying event\r simulator.MergeWith("../backgr/galice.root",3);\r simulator.Run(nev);\r }\r\rRaw Data\rThe digits stored in ROOT containers can be converted into the DATE [\ 2] format that will be the `payload' of the ROOT classes containing the raw data. This is done for the current event in the method Digits2Raw of the detector.\rThe class \13 HYPERLINK ""\ 1\14AliSimulation\15 manages the simulation of raw data. In order to create raw data DDL files, it loops over all events. For each event it creates a directory, changes to this directory and calls the method Digits2Raw of each selected detector. In the Digits2Raw method the DDL files of a detector are created from the digits for the current event.\rFor the conversion of the DDL files to a DATE file the \13 HYPERLINK ""\ 1\14AliSimulation\15 class uses the tool dateStream. 
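The shape of the data that Digits2Raw has to produce can be sketched as follows: each DDL file is a binary block starting with a header whose size field counts the full block, header included. The struct below is a simplified stand-in, not the actual AliRawDataHeader layout:

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Simplified stand-in for a DDL data header: a total-size word and an
// attribute byte (bit 0 = data valid, bit 1 = compressed).
struct DdlHeader {
  unsigned int fSize;        // total size of the block, header included
  unsigned char fAttributes;
};

// Build one DDL block: the header followed by the raw payload bytes.
std::vector<unsigned char> makeDdlBlock(const std::vector<unsigned char>& payload) {
  DdlHeader h;
  h.fSize = static_cast<unsigned int>(sizeof(DdlHeader) + payload.size());
  h.fAttributes = 0x1;  // bit 0 set: valid data
  std::vector<unsigned char> block(sizeof(DdlHeader) + payload.size());
  std::memcpy(&block[0], &h, sizeof(DdlHeader));
  if (!payload.empty())
    std::memcpy(&block[sizeof(DdlHeader)], &payload[0], payload.size());
  return block;
}
```

In the real framework the payload bytes would follow the detector's read-out channel order, as required below.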
To create a raw data file in ROOT format with the DATE output as payload, the program alimdc is utilized.\rThe only part that has to be implemented in each detector is the Digits2Raw method. In this method one file per DDL has to be created following the conventions for file names and DDL IDs. Each file is a binary file with a DDL data header at the beginning. The DDL data header is implemented in the structure \13 HYPERLINK ""\ 1\14AliRawDataHeader\15. The data member fSize should be set to the total size of the DDL raw data including the size of the header. The attribute bit 0 should be set by calling the method to indicate that the data in this file is valid. The attribute bit 1 can be set to indicate compressed raw data.\rThe detector-specific raw data is stored in the DDL files following the DDL data header. The format of this raw data should be as close as possible to the one that will be delivered by the detector. This includes the order in which the channels will be read out.\rBelow we show an example of raw data creation for all the detectors:\r\rvoid sim(Int_t nev=1) {\r AliSimulation simulator;\r // Create raw data for ALL detectors, rootify it and store in the\r // file raw.root. Do not delete the intermediate files\r simulator.SetWriteRawData("ALL","raw.root",kFALSE);\r simulator.Run(nev);\r }\r\rConfiguration: example of Config.C\rThe example below contains as comments the most important information:\rExample of Config.C\r// Function converting pseudorapidity\r// interval to polar angle interval. It is used to set \r// the limits in the generator\rFloat_t EtaToTheta(Float_t arg){\r return (180./TMath::Pi())*2.*atan(exp(-arg));\r}\r\r// Set Random Number seed using the current time\rTDatime dat;\rstatic UInt_t sseed = dat.Get();\r\rvoid Config()\r{\r gRandom->SetSeed(sseed);\r cout<<"Seed for random number generation= "<<gRandom->GetSeed()<<endl; \r \r // Load GEANT 3 library. 
It has to be in LD_LIBRARY_PATH\r gSystem->Load("libgeant321");\r\r // Instantiation of the particle transport package. gMC is set internally\r new TGeant3TGeo("C++ Interface to Geant3");\r\r // Create run loader and set some properties\r AliRunLoader* rl = AliRunLoader::Open("galice.root",\r AliConfig::GetDefaultEventFolderName(),\r "recreate");\r if (!rl) Fatal("Config.C","Cannot instantiate the Run Loader");\r rl->SetCompressionLevel(2);\r rl->SetNumberOfEventsPerFile(3);\r\r // Register the run loader in gAlice\r gAlice->SetRunLoader(rl);\r\r // Set external decayer\r LoadPythia();\r TVirtualMCDecayer *decayer = new AliDecayerPythia();\r decayer->SetForceDecay(kAll); // kAll means no specific decay is forced\r decayer->Init();\r\r // Register the external decayer in the transport package\r gMC->SetExternalDecayer(decayer);\r\r // STEERING parameters FOR ALICE SIMULATION\r // Specify event type to be transported through the ALICE setup\r // All positions are in cm, angles in degrees, and P and E in GeV\r // For the details see the GEANT 3 manual\r\r // Switch on/off the physics processes (global)\r // Please consult the file data/galice.cuts for detector\r // specific settings, i.e. 
DRAY\r gMC->SetProcess("DCAY",1); // Particle decay\r gMC->SetProcess("PAIR",1); // Pair production\r gMC->SetProcess("COMP",1); // Compton scattering\r gMC->SetProcess("PHOT",1); // Photo effect\r gMC->SetProcess("PFIS",0); // Photo fission\r gMC->SetProcess("DRAY",0); // Delta rays\r gMC->SetProcess("ANNI",1); // Positron annihilation\r gMC->SetProcess("BREM",1); // Bremstrahlung\r gMC->SetProcess("MUNU",1); // Muon nuclear interactions\r gMC->SetProcess("CKOV",1); // Cerenkov production\r gMC->SetProcess("HADR",1); // Hadronic interactions\r gMC->SetProcess("LOSS",2); // Energy loss (2=complete fluct.)\r gMC->SetProcess("MULS",1); // Multiple scattering\r gMC->SetProcess("RAYL",1); // Rayleigh scattering\r \r // Set the transport package cuts\r Float_t cut = 1.e-3; // 1MeV cut by default\r Float_t tofmax = 1.e10;\r\r gMC->SetCut("CUTGAM", cut); // Cut for gammas\r gMC->SetCut("CUTELE", cut); // Cut for electrons\r gMC->SetCut("CUTNEU", cut); // Cut for neutral hadrons\r gMC->SetCut("CUTHAD", cut); // Cut for charged hadrons\r gMC->SetCut("CUTMUO", cut); // Cut for muons\r gMC->SetCut("BCUTE", cut); // Cut for electron brems.\r gMC->SetCut("BCUTM", cut); // Cut for muon brems.\r gMC->SetCut("DCUTE", cut); // Cut for electron delta-rays\r gMC->SetCut("DCUTM", cut); // Cut for muon delta-rays\r gMC->SetCut("PPCUTM", cut); // Cut for e+e- pairs by muons\r gMC->SetCut("TOFMAX", tofmax); // Time of flight cut\r \r // Set up the particle generation\r\r // AliGenCocktail permits to combine several different generators\r AliGenCocktail *gener = new AliGenCocktail();\r\r // The phi range is always inside 0-360\r gener->SetPhiRange(0, 360);\r\r // Set pseudorapidity range from -8 to 8.\r Float_t thmin = EtaToTheta(8); // theta min. <–-> eta max\r Float_t thmax = EtaToTheta(-8); // theta max. 
<--> eta min \r gener->SetThetaRange(thmin,thmax);\r\r gener->SetOrigin(0, 0, 0); // vertex position\r gener->SetSigma(0, 0, 5.3); // Sigma in (X,Y,Z) (cm) on IP position\r gener->SetCutVertexZ(1.); // Truncate at 1 sigma\r gener->SetVertexSmear(kPerEvent); \r\r // First cocktail component: 100 "background" particles \r AliGenHIJINGpara *hijingparam = new AliGenHIJINGpara(100);\r hijingparam->SetMomentumRange(0.2, 999);\r gener->AddGenerator(hijingparam,"HIJING PARAM",1);\r\r // Second cocktail component: one gamma in PHOS direction\r AliGenBox *genbox = new AliGenBox(1);\r genbox->SetMomentumRange(10,11.);\r genbox->SetPhiRange(270.5,270.7);\r genbox->SetThetaRange(90.5,90.7);\r genbox->SetPart(22);\r gener->AddGenerator(genbox,"GENBOX GAMMA for PHOS",1);\r\r gener->Init(); // Initialization of the cocktail generator\r\r // Field (the last parameter is 1 => L3 0.4 T)\r AliMagFMaps* field = new AliMagFMaps("Maps","Maps", 2, 1., 10., 1);\r gAlice->SetField(field); \r\r // Make sure the current ROOT directory is in galice.root \r rl->CdGAFile();\r\r // Build the setup and set some detector parameters\r\r // ALICE BODY parameters. 
BODY is always present\r AliBODY *BODY = new AliBODY("BODY", "ALICE envelop");\r\r // Start with Magnet since detector layouts may depend\r // on the selected Magnet dimensions\r AliMAG *MAG = new AliMAG("MAG", "Magnet");\r\r AliABSO *ABSO = new AliABSOv0("ABSO", "Muon Absorber"); // Absorber\r\r AliDIPO *DIPO = new AliDIPOv2("DIPO", "Dipole version 2"); // Dipole magnet\r\r AliHALL *HALL = new AliHALL("HALL", "ALICE Hall"); // Hall\r \r AliFRAMEv2 *FRAME = new AliFRAMEv2("FRAME", "Space Frame"); // Space frame\r\r AliSHIL *SHIL = new AliSHILv2("SHIL", "Shielding Version 2"); // Shielding\r\r AliPIPE *PIPE = new AliPIPEv0("PIPE", "Beam Pipe"); // Beam pipe\r \r // ITS parameters\r AliITSvPPRasymmFMD *ITS = new AliITSvPPRasymmFMD("ITS",\r "ITS PPR detailed version with asymmetric services");\r ITS->SetMinorVersion(2); // don't change it if you're not an ITS developer\r ITS->SetReadDet(kFALSE); // don't change it if you're not an ITS developer\r ITS->SetThicknessDet1(200.); // detector thickness on layer 1: [100,300] µm\r ITS->SetThicknessDet2(200.); // detector thickness on layer 2: [100,300] µm\r ITS->SetThicknessChip1(150.); // chip thickness on layer 1: [150,300] µm\r ITS->SetThicknessChip2(150.); // chip thickness on layer 2: [150,300] µm\r ITS->SetRails(0); // 1 -> rails in ; 0 -> rails out\r ITS->SetCoolingFluid(1); // 1 -> water ; 0 -> freon\r ITS->SetEUCLID(0); // no output for the EUCLID CAD system \r\r \r AliTPC *TPC = new AliTPCv2("TPC", "Default"); // TPC\r\r AliTOF *TOF = new AliTOFv5T0("TOF", "normal TOF"); // TOF\r\r AliHMPID *HMPID = new AliHMPIDv1("HMPID", "normal HMPID"); // HMPID\r\r AliZDC *ZDC = new AliZDCv2("ZDC", "normal ZDC"); // ZDC\r\r AliTRD *TRD = new AliTRDv1("TRD", "TRD slow simulator"); // TRD\r\r AliFMD *FMD = new AliFMDv1("FMD", "normal FMD"); // FMD\r\r AliMUON *MUON = new AliMUONv1("MUON", "default"); // MUON\r\r AliPHOS *PHOS = new AliPHOSv1("PHOS", "IHEP"); // PHOS\r\r AliPMD *PMD = new AliPMDv1("PMD", "normal PMD"); // 
PMD\r\r AliT0 *T0 = new AliT0v1("T0", "T0 Detector"); // T0\r\r // EMCAL\r AliEMCAL *EMCAL = new AliEMCALv2("EMCAL", "SHISH_77_TRD1_2X2_FINAL_110DEG");\r\r AliVZERO *VZERO = new AliVZEROv7("VZERO", "normal VZERO"); // VZERO\r}\r\rvoid LoadPythia()\r{\r gSystem->Load(""); // Parton density functions\r gSystem->Load(""); // TGenerator interface\r gSystem->Load(""); // Pythia\r gSystem->Load(""); // ALICE specific implementations\r}\r\r\rEvent generation\r\ 1\rFigure \13 SEQ "Figure" \*Arabic \144\15: \13 HYPERLINK ""\ 1\14AliGenerator\15 is the base class, which has the responsibility to generate the primary particles of an event. Some realizations of this class do not generate the particles themselves but delegate the task to an external generator like PYTHIA through the \13 HYPERLINK ""\ 1\14TGenerator\15 interface.\rParametrized generation\rThe event generation based on parametrization can be used to produce signal-free final states. It avoids the dependence on a specific model, and is efficient and flexible. It can be used to study the track-reconstruction efficiency as a function of the initial multiplicity and occupancy.\rAliGenHIJINGpara [\ 2] is an example of an internal AliRoot generator, based on parametrized pseudorapidity density and transverse momentum distributions of charged and neutral pions and kaons. 
The pseudorapidity distribution was obtained from a HIJING simulation of central Pb–Pb collisions and scaled to a charged-particle multiplicity of 8000 in the pseudorapidity interval |η|<0.5. Note that this is about 10% higher than the corresponding value for a rapidity density with an average \13 EMBED \14\ 1\15 of 8000 in the interval |y|<0.5.\rThe transverse-momentum distribution is parametrized from the measured CDF pion pt-distribution at \13 EMBED \14\ 1\15. The corresponding kaon pt-distribution was obtained from the pion distribution by mt-scaling. For the details of these parametrizations see [\13 PAGEREF _RefE24 \h \ 1\14178\15].\rIn many cases, the expected transverse momentum and rapidity distributions of particles are known. In other cases, the effect of variations in these distributions must be investigated. In both situations, it is appropriate to use generators that produce primary particles and their decays sampling from parametrized spectra. 
To meet the different physics requirements in a modular way, the parametrizations are stored in independent function libraries wrapped into classes that can be plugged into the generator. This is schematically illustrated in \13 REF _Ref31420580 \h \ 1\14Figure 5\15 where four different generator libraries can be loaded via the abstract generator interface.\rIt is customary in heavy-ion event generation to superimpose different signals on an event in order to tune the reconstruction algorithms. This is possible in AliRoot via the so-called cocktail generator (see \13 REF _Ref31420932 \h \ 1\14Figure 6\15). This creates events from user-defined particle cocktails by choosing as ingredients a list of particle generators.\r\ 1\rFigure \13 SEQ "Figure" \*Arabic \145\15: \13 HYPERLINK ""\ 1\14AliGenParam\15 is a realization that generates particles using parameterized pt and pseudorapidity distributions. Instead of coding a fixed number of parameterizations directly into the class implementations, user-defined parameterization libraries (AliGenLib) can be connected at run time, providing maximum flexibility.\rAn example of \13 HYPERLINK ""\ 1\14AliGenParam\15 usage is presented below:\r\r// Example for Upsilon production from parameterization \r// using default library (AliMUONlib)\rAliGenParam *gener = new AliGenParam(ntracks, AliGenMUONlib::kUpsilon);\rgener->SetMomentumRange(0,999); // Wide cut on the Upsilon momentum\rgener->SetPtRange(0,999); // Wide cut on Pt\rgener->SetPhiRange(0. 
, 360.); // Full azimuthal range\rgener->SetYRange(2.5,4); // In the acceptance of the MUON arm\rgener->SetCutOnChild(1); // Enable cuts on Upsilon decay products\rgener->SetChildThetaRange(2,9); // Theta range for the decay products\rgener->SetOrigin(0,0,0); // Vertex position\rgener->SetSigma(0,0,5.3); // Sigma in (X,Y,Z) (cm) on IP position\rgener->SetForceDecay(kDiMuon); // Upsilon->mu+ mu- decay\rgener->SetTrackingFlag(0); // No particle transport\rgener->Init();\r\rTo facilitate the usage of different generators, we have developed an abstract generator interface called \13 HYPERLINK ""\ 1\14AliGenerator\15, see \13 REF _Ref31421362 \h \ 1\14Figure 4\15. The objective is to provide the user with an easy and coherent way to study a variety of physics signals as well as a full set of tools for testing and background studies. This interface allows the study of full events, signal processes and a mixture of both, i.e. cocktail events (see example below).\rSeveral event generators are available via the abstract ROOT class that implements the generic generator interface, \13 HYPERLINK ""\ 1\14TGenerator\15. By means of implementations of this abstract base class, we wrap FORTRAN Monte Carlo codes like PYTHIA, HERWIG, and HIJING, which are thus accessible from the AliRoot classes. In particular, the interface to PYTHIA includes the use of the nuclear structure functions of LHAPDF.\rPythia6\rPythia is used for the simulation of proton–proton interactions and for the generation of jets in the case of event merging. An example of minimum-bias Pythia events is presented below:\r\rAliGenPythia *gener = new AliGenPythia(-1); \rgener->SetMomentumRange(0,999999);\rgener->SetThetaRange(0., 180.);\rgener->SetYRange(-12,12);\rgener->SetPtRange(0,1000);\rgener->SetProcess(kPyMb); // Min. 
bias events\rgener->SetEnergyCMS(14000.); // LHC energy\rgener->SetOrigin(0, 0, 0); // Vertex position\rgener->SetSigma(0, 0, 5.3); // Sigma in (X,Y,Z) (cm) on IP position\rgener->SetCutVertexZ(1.); // Truncate at 1 sigma\rgener->SetVertexSmear(kPerEvent);// Smear per event\rgener->SetTrackingFlag(1); // Particle transport\rgener->Init();\rHIJING\rHIJING (Heavy-Ion Jet Interaction Generator) combines a QCD-inspired model of jet production [\13 PAGEREF _RefE8 \h \ 1\14178\15] with the Lund model [\ 2] for jet fragmentation. Hard or semi-hard parton scatterings with transverse momenta of a few GeV are expected to dominate high-energy heavy-ion collisions. The HIJING model has been developed with special emphasis on the role of minijets in p–p, p–A and A–A reactions at collider energies.\rDetailed systematic comparisons of HIJING results with a wide range of data demonstrate a qualitative understanding of the interplay between soft string dynamics and hard QCD interactions. In particular, HIJING reproduces many inclusive spectra and two-particle correlations, as well as the observed flavour and multiplicity dependence of the average transverse momentum.\rThe Lund FRITIOF [\ 2] model and the Dual Parton Model [\ 2] (DPM) have guided the formulation of HIJING for soft nucleus–nucleus reactions at intermediate energies: \13 EMBED \14\ 1\15. The hadronic-collision model has been inspired by the successful implementation of perturbative QCD processes in PYTHIA [\13 PAGEREF _RefE7 \h \ 1\14178\15]. Binary scattering with Glauber geometry for multiple interactions is used to extrapolate to p–A and A–A collisions.\rTwo important features of HIJING are jet quenching and nuclear shadowing. By jet quenching we mean the energy loss of partons in nuclear matter. It is responsible for an increase of the particle multiplicity at central rapidities. Jet quenching is taken into account by an expected energy loss of partons traversing dense matter. 
A simple colour configuration is assumed for the multi-jet system and the Lund fragmentation model is used for the hadronisation. HIJING does not simulate secondary interactions.\rShadowing describes the modification of the free nucleon parton density in the nucleus. At the low momentum fractions, x, probed in collisions at the LHC, shadowing results in a decrease of multiplicity. Parton shadowing is taken into account using a parameterization of the modification.\rHere is an example of event generation with HIJING:\r\rAliGenHijing *gener = new AliGenHijing(-1);\rgener->SetEnergyCMS(5500.); // center of mass energy \rgener->SetReferenceFrame("CMS"); // reference frame\rgener->SetProjectile("A", 208, 82); // projectile\rgener->SetTarget ("A", 208, 82); // target\rgener->KeepFullEvent(); // HIJING will keep the full parent-child chain\rgener->SetJetQuenching(1); // enable jet quenching\rgener->SetShadowing(1); // enable shadowing\rgener->SetDecaysOff(1); // neutral pion and heavy particle decays switched off\rgener->SetSpectators(0); // Don't track spectators\rgener->SetSelectAll(0); // kinematic selection\rgener->SetImpactParameterRange(0., 5.); // Impact parameter range (fm)\rgener->Init();\r\rAdditional universal generators\rThe following universal generators are available in AliRoot:\rAliGenDPMjet: this is an implementation of the dual parton model [\13 PAGEREF _RefE22 \h \ 1\14178\15];\rAliGenIsajet: a Monte Carlo event generator for p–p, \13 EMBED \14\ 1\15–p, and \13 EMBED \14\ 1\15 [\ 2];\rAliGenHerwig: a Monte Carlo package for simulating Hadron Emission Reactions With Interfering Gluons [\ 2].\rAn example of HERWIG configuration in the Config.C is shown below:\r\rAliGenHerwig *gener = new AliGenHerwig(-1);\r// final state kinematic cuts\rgener->SetMomentumRange(0,7000);\rgener->SetPhiRange(0. 
,360.);\rgener->SetThetaRange(0., 180.);\rgener->SetYRange(-10,10);\rgener->SetPtRange(0,7000);\r// vertex position and smearing \rgener->SetOrigin(0,0,0); // vertex position\rgener->SetVertexSmear(kPerEvent);\rgener->SetSigma(0,0,5.6); // Sigma in (X,Y,Z) (cm) on IP position\r// Beam momenta\rgener->SetBeamMomenta(7000,7000);\r// Beams\rgener->SetProjectile("P");\rgener->SetTarget("P");\r// Structure function\rgener->SetStrucFunc(kGRVHO);\r// Hard scattering\rgener->SetPtHardMin(200);\rgener->SetPtRMS(20);\r// Min bias\rgener->SetProcess(8000);\r\rGenerators for specific studies\rMevSim\rMevSim [\ 2] was developed for the STAR experiment to quickly produce a large number of A–A collisions for some specific needs; initially for HBT studies and for the testing of reconstruction and analysis software. However, since the user is able to generate specific signals, it was extended to flow and event-by-event fluctuation analysis.\rMevSim generates particle spectra according to a momentum model chosen by the user. The main input parameters are: types and numbers of generated particles, momentum-distribution model, reaction-plane and azimuthal-anisotropy coefficients, multiplicity fluctuation, number of generated events, etc. The momentum models include factorized pt and rapidity distributions, non-expanding and expanding thermal sources, arbitrary distributions in y and pt, and others. The reaction plane and azimuthal anisotropy are defined by the Fourier coefficients (maximum of six) including directed and elliptic flow. Resonance production can also be introduced.\rMevSim was originally written in FORTRAN. It was later integrated into AliRoot. A complete description of the AliRoot implementation of MevSim can be found on the web page (\13 HYPERLINK ""\ 1\14\15).\rGeVSim\rGeVSim is based on the MevSim [\13 PAGEREF _RefE30 \h \ 1\14179\15] event generator developed for the STAR experiment. 
\rGeVSim [\ 2] is a fast and easy-to-use Monte Carlo event generator implemented in AliRoot. It can provide events of a similar type, configurable by the user according to the specific needs of a simulation project, in particular those of flow and event-by-event fluctuation studies. It was developed to facilitate detector performance studies and tests of algorithms. GeVSim can also be used to generate signal-free events to be processed by afterburners, for example the HBT processor.\rGeVSim generates a list of particles by randomly sampling a distribution function. The user explicitly defines the parameters of single-particle spectra and their event-by-event fluctuations. Single-particle transverse-momentum and rapidity spectra can be either selected from a menu of four predefined distributions, the same as in MevSim, or provided by the user.\rFlow can be easily introduced into simulated events. The parameters of the flow are defined separately for each particle type and can be either set to a constant value or parameterized as a function of transverse momentum and rapidity. Two parameterizations of elliptic flow based on results obtained by RHIC experiments are provided.\rGeVSim also has extended possibilities for simulating event-by-event fluctuations. The model allows fluctuations following an arbitrary analytically defined distribution in addition to the Gaussian distribution provided by MevSim. It is also possible to systematically alter a given parameter to scan the parameter space in one run. This feature is useful when analyzing performance with respect to, for example, multiplicity or event-plane angle.\rThe current status and further development of GeVSim code and documentation can be found in [\ 2].\rHBT processor\rCorrelation functions constructed with the data produced by MevSim or any other event generator are normally flat in the region of small relative momenta. 
The HBT-processor afterburner introduces two particle correlations into the set of generated particles. It shifts the momentum of each particle so that the correlation function of a selected model is reproduced. The imposed correlation effects, due to Quantum Statistics (QS) and Coulomb Final State Interactions (FSI), do not affect the single-particle distributions and multiplicities. The event structures before and after the HBT processor are identical. Thus, the event reconstruction procedure with and without correlations is also identical. However, the track reconstruction efficiency, momentum resolution and particle identification are in general not identical, since correlated particles have a special topology at small relative velocities. We can thus verify the influence of various experimental factors on the correlation functions.\rThe method proposed by L. Ray and G.W. Hoffmann [\ 2] is based on random shifts of the particle three-momentum within a confined range. After each shift, a comparison is made with correlation functions resulting from the assumed model of the space–time distribution and with the single-particle spectra that should remain unchanged. The shift is kept if the \13 EMBED \14\ 1\15-test shows better agreement. The process is iterated until satisfactory agreement is achieved. In order to construct the correlation function, a reference sample is made by mixing particles from consecutive events. Such a method has an important impact on the simulations, when at least two events must be processed simultaneously.\rSome specific features of this approach are important for practical use:\rThe HBT processor can simultaneously generate correlations of up to two particle types (e.g. positive and negative pions). Correlations of other particles can be added subsequently.\rThe form of the correlation function has to be parameterized analytically. One and three dimensional parameterizations are possible.\rA static source is usually assumed. 
Dynamical effects, related to expansion or flow, can be simulated in a stepwise form by repeating simulations for different values of the space–time parameters associated with different kinematic intervals.\rCoulomb effects may be introduced by one of three approaches: Gamow factor, experimentally modified Gamow correction and integrated Coulomb wave functions for discrete values of the source radii.\rStrong interactions are not implemented.\rThe detailed description of the HBT processor can be found elsewhere [\ 2].\rFlow afterburner\rAzimuthal anisotropies, especially elliptic flow, carry unique information about collective phenomena and consequently are important for the study of heavy-ion collisions. Additional information can be obtained studying different heavy-ion observables, especially jets, relative to the event plane. Therefore it is necessary to evaluate the capability of ALICE to reconstruct the event plane and study elliptic flow.\rSince a well understood microscopic description of the flow effect is not available so far, it cannot be correctly simulated by microscopic event generators. Therefore, in order to generate events with flow, the user has to use event generators based on macroscopic models, like GeVSim [\13 PAGEREF _RefE31 \h \ 1\14179\15] or an afterburner which can generate flow on top of events generated by event generators based on the microscopic description of the interaction. In the AliRoot framework such a flow afterburner is implemented.\rThe algorithm to apply azimuthal correlation consists in shifting the azimuthal coordinates of the particles. The transformation is given by [\ 2]:\r\13 EMBED Equation.DSMT4 \14\ 1\15\rwhere \13 EMBED Equation.DSMT4 \14\ 1\15 is the flow coefficient to be obtained, \13 EMBED Equation.DSMT4 \14\ 1\15 is the harmonic number and \13 EMBED Equation.DSMT4 \14\ 1\15 is the event-plane angle. 
Note that the algorithm is deterministic and does not contain any random number generation.\rThe value of the flow coefficient can either be constant or parameterized as a function of transverse momentum and rapidity. Two parameterizations of elliptic flow are provided as in GeVSim.\r\rAliGenGeVSim* gener = new AliGenGeVSim(0);\r\rmult = 2000; // Mult is the number of charged particles in |eta| < 0.5\rvn = 0.01; // Vn\r\rFloat_t sigma_eta = 2.75; // Sigma of the Gaussian dN/dEta\rFloat_t etamax = 7.00; // Maximum eta\r\r// Scale from multiplicity in |eta| < 0.5 to |eta| < |etamax| \rFloat_t mm = mult * (TMath::Erf(etamax/sigma_eta/sqrt(2.)) /\rTMath::Erf(0.5/sigma_eta/sqrt(2.))); \r\r// Scale from charged to total multiplicity\rmm *= 1.587;\r\r// Define particles\r\r// 78% Pions (26% pi+, 26% pi-, 26% p0) T = 250 MeV\rAliGeVSimParticle *pp = \rnew AliGeVSimParticle(kPiPlus, 1, 0.26 * mm, 0.25, sigma_eta) ;\rAliGeVSimParticle *pm = \rnew AliGeVSimParticle(kPiMinus, 1, 0.26 * mm, 0.25, sigma_eta) ;\rAliGeVSimParticle *p0 = \rnew AliGeVSimParticle(kPi0, 1, 0.26 * mm, 0.25, sigma_eta) ;\r\r// 12% Kaons (3% K0short, 3% K0long, 3% K+, 3% K-) T = 300 MeV\rAliGeVSimParticle *ks = \rnew AliGeVSimParticle(kK0Short, 1, 0.03 * mm, 0.30, sigma_eta) ;\rAliGeVSimParticle *kl =\rnew AliGeVSimParticle(kK0Long, 1, 0.03 * mm, 0.30, sigma_eta) ;\rAliGeVSimParticle *kp =\rnew AliGeVSimParticle(kKPlus, 1, 0.03 * mm, 0.30, sigma_eta) ;\rAliGeVSimParticle *km =\rnew AliGeVSimParticle(kKMinus, 1, 0.03 * mm, 0.30, sigma_eta) ;\r\r// 10% Protons / Neutrons (5% Protons, 5% Neutrons) T = 250 MeV\rAliGeVSimParticle *pr =\rnew AliGeVSimParticle(kProton, 1, 0.05 * mm, 0.25, sigma_eta) ;\rAliGeVSimParticle *ne =\rnew AliGeVSimParticle(kNeutron, 1, 0.05 * mm, 0.25, sigma_eta) ;\r\r// Set Elliptic Flow properties \r\rFloat_t pTsaturation = 2. ;\r\rpp->SetEllipticParam(vn,pTsaturation,0.) ;\rpm->SetEllipticParam(vn,pTsaturation,0.) ;\rp0->SetEllipticParam(vn,pTsaturation,0.) 
;\rpr->SetEllipticParam(vn,pTsaturation,0.) ;\rne->SetEllipticParam(vn,pTsaturation,0.) ;\rks->SetEllipticParam(vn,pTsaturation,0.) ;\rkl->SetEllipticParam(vn,pTsaturation,0.) ;\rkp->SetEllipticParam(vn,pTsaturation,0.) ;\rkm->SetEllipticParam(vn,pTsaturation,0.) ;\r\r// Set Direct Flow properties \r\rpp->SetDirectedParam(vn,1.0,0.) ;\rpm->SetDirectedParam(vn,1.0,0.) ;\rp0->SetDirectedParam(vn,1.0,0.) ;\rpr->SetDirectedParam(vn,1.0,0.) ;\rne->SetDirectedParam(vn,1.0,0.) ;\rks->SetDirectedParam(vn,1.0,0.) ;\rkl->SetDirectedParam(vn,1.0,0.) ;\rkp->SetDirectedParam(vn,1.0,0.) ;\rkm->SetDirectedParam(vn,1.0,0.) ;\r\r// Add particles to the list\r\rgener->AddParticleType(pp) ;\rgener->AddParticleType(pm) ;\rgener->AddParticleType(p0) ;\rgener->AddParticleType(pr) ;\rgener->AddParticleType(ne) ;\rgener->AddParticleType(ks) ;\rgener->AddParticleType(kl) ;\rgener->AddParticleType(kp) ;\rgener->AddParticleType(km) ;\r\r// Random Ev.Plane\r\rTF1 *rpa = new TF1("gevsimPsiRndm","1", 0, 360);\r\rgener->SetPtRange(0., 9.) ; // Used for bin size in numerical integration\rgener->SetPhiRange(0, 360);\r\rgener->SetOrigin(0, 0, 0); // vertex position\rgener->SetSigma(0, 0, 5.3); // Sigma in (X,Y,Z) (cm) on IP position\rgener->SetCutVertexZ(1.); // Truncate at 1 sigma\rgener->SetVertexSmear(kPerEvent); \rgener->SetTrackingFlag(1);\rgener->Init();\r\rGenerator for e+e- pairs in Pb–Pb collisions\rIn addition to strong interactions of heavy ions in central and peripheral collisions, ultra-peripheral collisions of ions give rise to coherent, mainly electromagnetic interactions among which the dominant process is the (multiple) \13 EMBED Equation.DSMT4 \14\ 1\15-pair production [\ 2]:\r\13 EMBED Equation.DSMT4 \14\ 1\15\rwhere \13 EMBED \14\ 1\15 is the pair multiplicity. Most electron–positron pairs are produced into the very forward direction escaping the experiment. However, for Pb–Pb collisions at the LHC the cross-section of this process, about \13 EMBED \14\ 1\15230 kb, is enormous. 
A sizable fraction of pairs produced with large-momentum transfer can contribute to the hit rate in the forward detectors increasing the occupancy or trigger rate. In order to study this effect, an event generator for \13 EMBED \14\ 1\15-pair production has been implemented in the AliRoot framework [\ 2]. The class TEpEmGen is a realisation of the TGenerator interface for external generators and wraps the FORTRAN code used to calculate the differential cross-section. AliGenEpEmv1 derives from AliGenerator and uses the external generator to put the pairs on the AliRoot particle stack.\rCombination of generators: AliGenCocktail\rDifferent generators can be combined together so that each one adds the particles it has generated to the event stack via the \13 HYPERLINK ""\ 1\14AliGenCocktail\15 class.\r\ 1\rFigure \13 SEQ "Figure" \*Arabic \146\15: The \13 HYPERLINK ""\ 1\14AliGenCocktail\15 generator is a realization of \13 HYPERLINK ""\ 1\14AliGenerator\15\13 HYPERLINK ""\ 1\14 \15which does not generate particles itself but delegates this task to a list of objects of type \13 HYPERLINK ""\ 1\14AliGenerator\15\13 HYPERLINK ""\ 1\14 \15that can be connected as entries (\13 HYPERLINK ""\ 1\14AliGenCocktailEntry\15) at run-time. 
In this way, different physics channels can be combined in one event.\rHere is an example of cocktail, used for studies in the TRD detector:\r\r// The cocktail generator\rAliGenCocktail *gener = new AliGenCocktail();\r\r// Phi meson (10 particles)\rAliGenParam *phi = \rnew AliGenParam(10,new AliGenMUONlib(),AliGenMUONlib::kPhi,"Vogt PbPb");\rphi->SetPtRange(0, 100);\rphi->SetYRange(-1., +1.);\rphi->SetForceDecay(kDiElectron);\r\r// Omega meson (10 particles)\rAliGenParam *omega = \rnew AliGenParam(10,new AliGenMUONlib(),AliGenMUONlib::kOmega,"Vogt PbPb");\romega->SetPtRange(0, 100);\romega->SetYRange(-1., +1.);\romega->SetForceDecay(kDiElectron);\r\r// J/psi \rAliGenParam *jpsi = new AliGenParam(10,new AliGenMUONlib(),\rAliGenMUONlib::kJpsiFamily,"Vogt PbPb");\rjpsi->SetPtRange(0, 100);\rjpsi->SetYRange(-1., +1.);\rjpsi->SetForceDecay(kDiElectron);\r\r// Upsilon family\rAliGenParam *ups = new AliGenParam(10,new AliGenMUONlib(),\rAliGenMUONlib::kUpsilonFamily,"Vogt PbPb");\rups->SetPtRange(0, 100);\rups->SetYRange(-1., +1.);\rups->SetForceDecay(kDiElectron);\r \r// Open charm particles\rAliGenParam *charm = new AliGenParam(10,new AliGenMUONlib(), \rAliGenMUONlib::kCharm,"central");\r charm->SetPtRange(0, 100);\rcharm->SetYRange(-1.5, +1.5);\rcharm->SetForceDecay(kSemiElectronic);\r\r// Beauty particles: semi-electronic decays\rAliGenParam *beauty = new AliGenParam(10,new AliGenMUONlib(), \rAliGenMUONlib::kBeauty,"central");\rbeauty->SetPtRange(0, 100);\rbeauty->SetYRange(-1.5, +1.5);\rbeauty->SetForceDecay(kSemiElectronic);\r\r// Beauty particles to J/psi ee\rAliGenParam *beautyJ = new AliGenParam(10, new AliGenMUONlib(), \rAliGenMUONlib::kBeauty,"central");\rbeautyJ->SetPtRange(0, 100);\rbeautyJ->SetYRange(-1.5, +1.5);\rbeautyJ->SetForceDecay(kBJpsiDiElectron);\r\r// Adding all the components of the 
cocktail\rgener->AddGenerator(phi,"Phi",1);\rgener->AddGenerator(omega,"Omega",1);\rgener->AddGenerator(jpsi,"J/psi",1);\rgener->AddGenerator(ups,"Upsilon",1);\rgener->AddGenerator(charm,"Charm",1);\rgener->AddGenerator(beauty,"Beauty",1);\rgener->AddGenerator(beautyJ,"J/Psi from Beauty",1);\r\r// Settings, common for all components\rgener->SetOrigin(0, 0, 0); // vertex position\rgener->SetSigma(0, 0, 5.3); // Sigma in (X,Y,Z) (cm) on IP position\rgener->SetCutVertexZ(1.); // Truncate at 1 sigma\rgener->SetVertexSmear(kPerEvent); \rgener->SetTrackingFlag(1);\rgener->Init();\r\rParticle transport\rTGeo essential information\rA detailed description of the ROOT geometry package is available in the ROOT User Guide [\ 2]. Several examples can be found in $ROOTSYS/tutorials, among them assembly.C, csgdemo.C, geodemo.C, nucleus.C, rootgeom.C, etc. Here we show a simple usage for export/import of the ALICE geometry and for check for overlaps and extrusions:\r\raliroot\r root [0] gAlice->Init()\r root [1] gGeoManager->Export("geometry.root")\r root [2] .q\r aliroot\r root [0] TGeoManager::Import("geometry.root")\r root [1] gGeoManager->CheckOverlaps()\r root [2] gGeoManager->PrintOverlaps()\r root [3] new TBrowser\r # Now you can navigate in Geometry->Illegal overlaps\r # and draw each overlap (double click on it)\rVisualization\rBelow we show an example of VZERO visualization using the ROOT geometry package:\r\raliroot\r root [0] gAlice->Init()\r root [1] TGeoVolume *top = gGeoManager->GetMasterVolume()\r root [2] Int_t nd = top->GetNdaughters()\r root [3] for (Int_t i=0; i<nd; i++) top->GetNode(i)->GetVolume()->InvisibleAll()\r root [4] TGeoVolume *v0ri = gGeoManager->GetVolume("V0RI")\r root [5] TGeoVolume *v0le = gGeoManager->GetVolume("V0LE")\r root [6] v0ri->SetVisibility(kTRUE);\r root [7] v0ri->VisibleDaughters(kTRUE);\r root [8] v0le->SetVisibility(kTRUE);\r root [9] v0le->VisibleDaughters(kTRUE);\r root [10] top->Draw();\rParticle decays\rWe use Pythia to generate 
one-particle decays during the transport. The default decay channels can be seen in the following way:\r\raliroot\r root [0] AliPythia * py = AliPythia::Instance()\r root [1] py->Pylist(12); >> decay.list\r\rThe file decay.list will contain the list of particles decays available in Pythia. Now, if we want to force the decay \13 EMBED \14\ 1\15, the following lines should be included in the Config.C before we register the decayer:\r\rAliPythia * py = AliPythia::Instance();\r py->SetMDME(1059,1,0);\r py->SetMDME(1060,1,0);\r py->SetMDME(1061,1,0);\r\rwhere 1059,1060 and 1061 are the indexes of the decay channel (from decay.list above) we want to switch off.\rExamples\rFast simulation\rThis example is taken from the macro $ALICE_ROOT/FASTSIM/fastGen.C. It shows how one can create a kinematics tree which later can be used as input for the particle transport. A simple selection of events with high multiplicity is implemented. \r\rAliGenerator* CreateGenerator();\r\rvoid fastGen(Int_t nev = 1, char* filename = "galice.root")\r{\r// Runloader\r \r AliRunLoader* rl = AliRunLoader::Open("galice.root","FASTRUN","recreate");\r \r rl->SetCompressionLevel(2);\r rl->SetNumberOfEventsPerFile(nev);\r rl->LoadKinematics("RECREATE");\r rl->MakeTree("E");\r gAlice->SetRunLoader(rl);\r\r// Create stack\r rl->MakeStack();\r AliStack* stack = rl->Stack();\r \r// Header\r AliHeader* header = rl->GetHeader();\r//\r// Create and Initialize Generator\r AliGenerator *gener = CreateGenerator();\r gener->Init();\r gener->SetStack(stack);\r \r//\r// Event Loop\r//\r Int_t iev;\r \r for (iev = 0; iev < nev; iev++) {\r\r printf("\n \n Event number %d \n \n", iev);\r \r// Initialize event\r header->Reset(0,iev);\r rl->SetEventNumber(iev);\r stack->Reset();\r rl->MakeTree("K");\r// stack->ConnectTree();\r \r// Generate event\r gener->Generate();\r// Analysis\r Int_t npart = stack->GetNprimary();\r printf("Analyse %d Particles\n", npart);\r for (Int_t part=0; part<npart; part++) {\r TParticle *MPart 
= stack->Particle(part);\r Int_t mpart = MPart->GetPdgCode();\r printf("Particle %d\n", mpart);\r }\r \r// Finish event\r header->SetNprimary(stack->GetNprimary());\r header->SetNtrack(stack->GetNtrack()); \r// I/O\r// \r stack->FinishEvent();\r header->SetStack(stack);\r rl->TreeE()->Fill();\r rl->WriteKinematics("OVERWRITE");\r\r } // event loop\r//\r// Termination\r// Generator\r gener->FinishRun();\r// Write file\r rl->WriteHeader("OVERWRITE");\r gener->Write();\r rl->Write();\r \r}\r\r\rAliGenerator* CreateGenerator()\r{\r gSystem->Load(""); \r gSystem->Load(""); \r gSystem->Load(""); \r gSystem->Load("");\r AliGenPythia *gener = new AliGenPythia(1);\r\r// vertex position and smearing \r gener->SetVertexSmear(kPerEvent);\r// structure function\r gener->SetStrucFunc();\r// charm, beauty, charm_unforced, beauty_unforced, jpsi, jpsi_chi, mb\r gener->SetProcess(kPyJets);\r// Centre of mass energy \r gener->SetEnergyCMS(5500.);\r// Pt transfer of the hard scattering\r gener->SetPtHard(50.,50.2);\r// Initialize generator \r return gener;\r}\r\rReading of the kinematics tree as input for the particle transport\rWe suppose that the macro fastGen.C above has been used to generate the corresponding set of files, galice.root and Kinematics.root, and that they are stored in a separate subdirectory, for example kine. Then the following code in Config.C will read the set of files and put the particles on the stack for transport:\r\rAliGenExtFile *gener = new AliGenExtFile(-1);\r\r gener->SetMomentumRange(0,14000);\r gener->SetPhiRange(0.,360.);\r gener->SetThetaRange(45,135);\r gener->SetYRange(-10,10);\r gener->SetOrigin(0, 0, 0); //vertex position\r gener->SetSigma(0, 0, 5.3); //Sigma in (X,Y,Z) (cm) on IP position\r\r AliGenReaderTreeK * reader = new AliGenReaderTreeK();\r reader->SetFileName("../galice.root");\r\r gener->SetReader(reader);\r gener->SetTrackingFlag(1);\r \r gener->Init();\r\rUsage of different generators\rNumerous examples are available in $ALICE_ROOT/macros/. 
The corresponding part can be extracted and placed in the respective Config.C file.

Reconstruction
In this section we describe the ALICE reconstruction framework and software.
Reconstruction Framework
This chapter focuses on the reconstruction framework from the (detector) software developers' point of view.
If not otherwise specified, we refer to the "global ALICE coordinate system" […]. It is a right-handed coordinate system with the z-axis coinciding with the beam-pipe axis and pointing away from the muon arm, the y-axis pointing upward, and its origin defined by the intersection of the z-axis with the central-membrane plane of the TPC.
In the following, we briefly summarize the main conceptual terms of the reconstruction framework (see also section 3.2):
Digit: This is a digitized signal (ADC count) obtained by a sensitive pad of a detector at a certain time.
Cluster: This is a set of adjacent (in space and/or in time) digits that were presumably generated by the same particle crossing the sensitive element of a detector.
Space point (reconstructed): This is the estimate of the position where a particle crossed the sensitive element of a detector (often obtained by calculating the centre of gravity of the 'cluster').
Track (reconstructed): This is a set of five parameters of the particle's trajectory (such as the curvature and the angles with respect to the coordinate axes) together with the corresponding covariance matrix, estimated at a given point in space.
The input to the reconstruction framework consists of digits, either in ROOT tree format or in raw data format. First, a local reconstruction of clusters is performed in each detector. Then vertexes and tracks are reconstructed and particle identification is carried out. The output of the reconstruction is the Event Summary Data (ESD).
The AliReconstruction class provides a simple user interface to the reconstruction framework, which is explained in the source code.

Figure 7: Reconstruction framework

Requirements and Guidelines
The development of the reconstruction framework has been guided by the following requirements and code of practice:
The prime goal of the reconstruction is to provide the data that are needed for a physics analysis.
The reconstruction should aim at high efficiency, purity and resolution.
The user should have an easy-to-use interface to extract the required information from the ESD.
The reconstruction code should be efficient and maintainable.
The reconstruction should be as flexible as possible. It should be possible to do the reconstruction in one detector even if other detectors are not operational. To achieve such flexibility, each detector module should be able to
find tracks starting from seeds provided by another detector (external seeding),
find tracks without using information from other detectors (internal seeding),
find tracks from external seeds and add tracks from internal seeds,
and propagate tracks through the detector, using the already assigned clusters, in the inward and outward directions.
Where it is appropriate, common (base) classes should be used in the different reconstruction modules.
Interdependencies between the reconstruction modules should be minimized. If possible, the exchange of information between detectors should be done via a common track class.
The chain of reconstruction program(s) should be callable and steerable in an easy way.
There should be no assumptions on the structure or names of files, or on the number or order of events.
Each class, data member and method should have a correct, precise and helpful HTML documentation.
AliReconstructor
The base class AliReconstructor defines the interface from the steering class AliReconstruction to the detector-specific reconstruction code.
For each detector, there is a derived reconstructor class. The user can set options for each reconstructor as a string parameter that is accessible inside the reconstructor via the method GetOption.
The detector-specific reconstructors are created via plug-ins. Therefore they must have a default constructor. If no plug-in handler is defined by the user (in .rootrc), it is assumed that the name of the reconstructor for detector <DET> is Ali<DET>Reconstructor and that it is located in the library lib<DET> (or lib<DET>.so in case the libraries of the detector have not been split and are all bundled in a single one).
Input Data
If the input data are provided in the form of ROOT trees, either the loaders or the trees themselves are used to access the digits. In case of raw data input, the digits are accessed via a raw reader.
If a galice.root file exists, the run loader will be retrieved from it. Otherwise the run loader and the headers will be created from the raw data. The reconstruction cannot work if there is neither a galice.root file nor raw data input.
Output Data
The clusters (rec. points) are considered intermediate output and are stored in ROOT trees handled by the loaders. The final output of the reconstruction is a tree with objects of type AliESD stored in the file AliESDs.root. This Event Summary Data (ESD) contains lists of reconstructed tracks/particles and global event properties. The detailed description of the ESD can be found in section ESD.
Local Reconstruction (Clusterization)
The first step of the reconstruction is the so-called "local reconstruction". It is executed for each detector separately and without exchanging information with other detectors. Usually the clusterization is done in this step.
The local reconstruction is invoked via the method Reconstruct of the reconstructor object. Each detector reconstructor runs the local reconstruction for all events.
The local reconstruction method is only called if the method HasLocalReconstruction of the reconstructor returns kTRUE.
Instead of running the local reconstruction directly on raw data, it is possible to first convert the raw data digits into a digits tree and then to call the Reconstruct method with a tree as input parameter. This conversion is done by the method ConvertDigits. The reconstructor has to announce that it can convert the raw data digits by returning kTRUE in the method HasDigitConversion.
Vertexing
The current reconstruction of the primary-vertex position in ALICE is done using the information provided by the silicon pixel detectors, which constitute the two innermost layers of the ITS.
The algorithm starts by looking at the distribution of the z-coordinates of the reconstructed space points in the first pixel layer. For a vertex with z-coordinate z_true = 0, the distribution is symmetric and its centroid (z_cen) is very close to the nominal vertex position. When the primary vertex is moved along the z-axis, an increasing fraction of hits is lost and the centroid of the distribution no longer gives the primary-vertex position. However, for primary-vertex locations not too far from z_true = 0 (up to about 12 cm), the centroid of the distribution is still correlated with the true vertex position. The saturation effect at large values of the vertex position (z_true = 12–15 cm) is, however, not critical, since this procedure is only meant to find a rough vertex position, in order to introduce some cut along z.
To find the final vertex position, the correlation between the points z1, z2 in the two layers is considered. More details and performance studies are available in […].
A vertexer object derived from AliVertexer reconstructs the primary vertex. After the local reconstruction has been done for all detectors, the vertexer method FindVertexForCurrentEvent is called for each event.
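The centroid-based first estimate described above can be sketched in a few lines of standalone C++ (a simplified illustration only; the function name and data layout are invented for this example and are not the actual AliITSVertexerZ code):

```cpp
#include <vector>
#include <cstddef>

// Rough primary-vertex estimate: centroid of the z-coordinates of the
// space points reconstructed in the first pixel layer. As explained in
// the text, this is only valid for vertices not too far from z = 0 and
// is used merely to introduce a cut along z.
double RoughVertexZ(const std::vector<double>& zPoints)
{
    if (zPoints.empty()) return 0.0;
    double sum = 0.0;
    for (std::size_t i = 0; i < zPoints.size(); ++i) sum += zPoints[i];
    return sum / zPoints.size();  // z_cen, close to z_true for small offsets
}
```

The final vertex position is then obtained from the z1–z2 correlation between the two layers, as described in the text.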
It returns a pointer to a vertex object of type AliESDVertex.
The vertexer object is created by the method CreateVertexer of the reconstructor. So far, only the ITS is used to determine the primary vertex (AliITSVertexerZ class).
The precision of the primary-vertex reconstruction in the bending plane, required for the reconstruction of D and B mesons in p–p events, can be achieved only after the tracking is done. The method is implemented in AliITSVertexerTracks. It is called as a second estimation of the primary vertex. The details of the algorithm can be found in Appendix VertexerTracks.
Combined Track Reconstruction
The combined track reconstruction tries to accumulate the information from different detectors in order to optimize the track-reconstruction performance. The result is stored in the combined track objects. The AliESDtrack class also provides the possibility to exchange information between detectors without introducing dependencies between the reconstruction modules. This is achieved by using integer indexes pointing to the specific track objects, which allows the retrieval of the information as needed. The list of combined tracks can be kept in memory and passed from one reconstruction module to another. The storage of the combined tracks should be done in the standard way.
The classes responsible for the reconstruction of tracks are derived from AliTracker. They are created by the method CreateTracker of the reconstructors. The reconstructed position of the primary vertex is made available to them via the method SetVertex. Before the track reconstruction in a detector starts, clusters are loaded from the clusters tree by means of the method LoadClusters. After the track reconstruction, the clusters are unloaded by the method UnloadClusters.
The track reconstruction (in the barrel part) is done in three passes. The first pass consists of track finding and fitting in the inward direction, first in the TPC and then in the ITS.
The virtual method Clusters2Tracks (belonging to the class AliTracker) provides the interface to this pass. The method for the next pass is PropagateBack. It does the track reconstruction in the outward direction and is invoked for all detectors, starting with the ITS. The last pass is the track refit in the inward direction, in order to obtain the track parameters at the vertex. The corresponding method RefitInward is called for the TRD, TPC and ITS. All three track-reconstruction methods have an AliESD object as argument, which is used to exchange track information between detectors without introducing code dependencies between the detector trackers.
Depending on the way the information is used, the tracking methods can be divided into two large groups: global methods and local methods. Each group has its advantages and disadvantages.
With the global methods, all track measurements are treated simultaneously and the decision to include or exclude a measurement is taken when all the information about the track is known. Typical algorithms belonging to this class are combinatorial methods, the Hough transform, templates, and conformal mappings. The advantages are stability with respect to noise and mis-measurements, and the possibility to operate directly on the raw data. On the other hand, these methods require a precise global track model. Sometimes such a track model is unknown or does not even exist because of stochastic processes (energy losses, multiple scattering), non-uniformity of the magnetic field, etc. In ALICE, global tracking methods are being extensively used in the High-Level Trigger (HLT) software. There, we are mostly interested in the reconstruction of the high-momentum tracks; the required precision is not crucial, but the speed of the calculations is of great importance.
Local methods do not require knowledge of the global track model. The track parameters are always estimated "locally" at a given point in space.
The decision to accept or reject a measurement is made using either the local information or the information coming from the previous 'history' of this track. With these methods, all the local track peculiarities (stochastic physics processes, magnetic fields, detector geometry) can naturally be accounted for. Unfortunately, local methods rely on sophisticated space-point reconstruction algorithms (including the unfolding of overlapping clusters). They are sensitive to noise, to wrong or displaced measurements, and to the precision of the space-point error parameterization. The most advanced kind of local track-finding method is Kalman filtering, which was introduced into track reconstruction by P. Billoir in 1983 […].
When applied to the track reconstruction problem, the Kalman-filter approach shows many attractive properties:
It is a method for simultaneous track recognition and fitting.
It offers the possibility to reject incorrect space points "on the fly", during a single tracking pass. Such incorrect points can appear as a consequence of the imperfection of the cluster finder, due to noise, or as points from other tracks accidentally captured in the list of points to be associated with the track under consideration. With other tracking methods, one usually needs an additional fitting pass to get rid of incorrectly assigned points.
In case of substantial multiple scattering, track measurements are correlated and therefore large matrices (of the size of the number of measured points) need to be inverted during a global fit. In the Kalman-filter procedure we only have to manipulate up to 5 × 5 matrices (although as many times as there are measured space points), which is much faster.
One can handle multiple scattering and energy losses in a simpler way than in the case of global methods.
At each step, the material budget can be calculated and the mean correction computed accordingly.
It is a natural way to find the extrapolation of a track from one detector to another (for example from the TPC to the ITS or to the TRD).
In ALICE we require good track-finding efficiency and reconstruction precision for tracks down to pt = 100 MeV/c. Some of the ALICE tracking detectors (ITS, TRD) have a significant material budget. Under such conditions, one cannot neglect the energy losses or the multiple scattering in the reconstruction. There are also rather big dead zones between the tracking detectors, which complicate finding the continuation of the same track. For all these reasons, the Kalman-filtering approach has been our choice for the offline reconstruction since 1994.
General tracking strategy
All parts of the reconstruction software for the ALICE central tracking detectors (the ITS, TPC and TRD) follow the same convention for the coordinate system used. All clusters and tracks are always expressed in some local coordinate system related to a given sub-detector (TPC sector, ITS module, etc.). This local coordinate system is defined as follows:
It is a right-handed Cartesian coordinate system;
Its origin and z-axis coincide with those of the global ALICE coordinate system;
The x-axis is perpendicular to the sub-detector's "sensitive plane" (TPC pad row, ITS ladder, etc.).
Such a choice reflects the symmetry of the ALICE set-up and therefore simplifies the reconstruction equations. It also enables the fastest possible transformations from a local coordinate system to the global one and back again, since these transformations become single rotations around the z-axis.
The reconstruction begins with cluster finding in all of the ALICE central detectors (ITS, TPC, TRD, TOF, HMPID and PHOS). Using the clusters reconstructed at the two pixel layers of the ITS, the position of the primary vertex is estimated and the track finding starts.
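The local-to-global transformation described above, a single rotation around the z-axis by the sector (or module) angle alpha, can be illustrated with a minimal standalone sketch (the function names are invented for this example; in AliRoot the transformation is implemented inside the tracking classes):

```cpp
#include <cmath>

// Rotate a local point (xLoc, yLoc) into the global frame; alpha is the
// angle between the local x-axis of the sub-detector and the global x-axis.
// The z-coordinate is unchanged, which is why the transformation is cheap.
void LocalToGlobal(double alpha, double xLoc, double yLoc,
                   double& xGlo, double& yGlo)
{
    xGlo = xLoc * std::cos(alpha) - yLoc * std::sin(alpha);
    yGlo = xLoc * std::sin(alpha) + yLoc * std::cos(alpha);
}

// The inverse transformation is simply a rotation by -alpha.
void GlobalToLocal(double alpha, double xGlo, double yGlo,
                   double& xLoc, double& yLoc)
{
    LocalToGlobal(-alpha, xGlo, yGlo, xLoc, yLoc);
}
```

Since the rotation is orthogonal, going local → global → local reproduces the original coordinates, which is what makes switching between the frames essentially free during the track following.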
As described later, the cluster-finding and track-finding procedures performed in the detectors have some detector-specific features. Moreover, within a given detector, on account of the high occupancy and the large number of overlapping clusters, cluster finding and track finding are not completely independent: the number and the positions of the clusters are completely determined only at the track-finding step.
The general tracking strategy is the following. We start from our best tracker device, i.e. the TPC, and there from the outer radius, where the track density is minimal. First, the track candidates ("seeds") are found. Because of the small number of clusters assigned to a seed, the precision of its parameters is not sufficient to safely extrapolate it outwards to the other detectors. Instead, the tracking stays within the TPC and proceeds towards the smaller TPC radii. Whenever possible, new clusters are associated with a track candidate at each step of the Kalman filter, if they are within a given distance from the track prolongation, and the track parameters are progressively refined. When all of the seeds have been extrapolated to the inner limit of the TPC, we proceed with the ITS. The ITS tracker tries to prolong the TPC tracks as close as possible to the primary vertex. On the way to the primary vertex, the tracks are assigned additional reconstructed ITS clusters, which also improves the estimation of the track parameters.
After all the track candidates from the TPC have been assigned their clusters in the ITS, a special ITS stand-alone tracking procedure is applied to the rest of the ITS clusters. This procedure tries to recover those tracks that were not found in the TPC because of the pt cut-off, the dead zones between the TPC sectors, or decays.
At this point, the tracking is restarted from the vertex back to the outer layer of the ITS and then continued towards the outer wall of the TPC.
For the tracks that were labelled by the ITS tracker as potentially primary, several particle-mass-dependent time-of-flight hypotheses are calculated. These hypotheses are then used for the particle identification (PID) within the TOF detector. Once the outer radius of the TPC is reached, the precision of the estimated track parameters is sufficient to extrapolate the tracks to the TRD, TOF, HMPID and PHOS detectors. Tracking in the TRD is done in a similar way to that in the TPC. Tracks are followed to the outer wall of the TRD, and the assigned clusters further improve the momentum resolution. Next, the tracks are extrapolated to the TOF, HMPID and PHOS, where they acquire the PID information. Finally, all the tracks are refitted with the Kalman filter backwards to the primary vertex (or to the innermost possible radius, in the case of secondary tracks). This gives the most precise information about the track parameters at the point where the track appeared.
The tracks that passed the final refit towards the primary vertex are used for the secondary-vertex (V0, cascade, kink) reconstruction. There is also an option to reconstruct the secondary vertexes "on the fly" during the tracking itself. The potential advantage of such a possibility is that the tracks coming from a secondary-vertex candidate are not extrapolated beyond the vertex, thus minimizing the risk of picking up a wrong track prolongation. This option is currently under investigation.
The reconstructed tracks (together with the PID information), kink, V0 and cascade particle decays are then stored in the Event Summary Data (ESD).
More details about the reconstruction algorithms can be found in Chapter 5 of the ALICE Physics Performance Report [179].
Filling of ESD
After the tracks have been reconstructed and stored in the AliESD object, further information is added to the ESD. For each detector the method FillESD of the reconstructor is called.
Inside this method, e.g., V0s are reconstructed or particles are identified (PID). For the PID, a Bayesian approach is used (see Appendix 7.2). The constants and some functions that are used for the PID are defined in the class AliPID.
Monitoring of Performance
For the monitoring of the track-reconstruction performance, the class AliTrackReference is used. Objects of this type are created during the simulation at selected locations. The reconstructed tracks can be easily compared with the simulated particles at these locations. This allows studying and monitoring the performance of the track reconstruction in detail. The creation of the objects used for the comparison should not interfere with the reconstruction algorithm and can be switched on or off.
Several "comparison" macros permit monitoring the efficiency and the resolution of the tracking. Here is a typical usage (the simulation and the reconstruction have been done in advance):

aliroot
root [0] gSystem->SetIncludePath("-I$ROOTSYS/include \
-I$ALICE_ROOT/include \
-I$ALICE_ROOT/TPC \
-I$ALICE_ROOT/ITS \
-I$ALICE_ROOT/TOF")
root [1] .L $ALICE_ROOT/TPC/AliTPCComparison.C++
root [2] .L $ALICE_ROOT/ITS/AliITSComparisonV2.C++
root [3] .L $ALICE_ROOT/TOF/AliTOFComparison.C++
root [4] AliTPCComparison()
root [5] AliITSComparisonV2()
root [6] AliTOFComparison()

Another macro can be used to provide a preliminary estimate of the combined acceptance: STEER/CheckESD.C.
Classes
The following classes are used in the reconstruction:
AliTrackReference is used to store the position and the momentum of a simulated particle at given locations of interest (e.g. when the particle enters or exits a detector, or when it decays). It is used mainly for debugging and tuning of the tracking.
AliExternalTrackParam describes the status of a track at a given point. It contains the track parameters and its covariance matrix.
This parameterization is used to exchange tracks between the detectors. A set of functions returning the position and the momentum of tracks in the global coordinate system, as well as the track impact parameters, is implemented. There is also the option to propagate the track to a given radius, via PropagateTo and Propagate.
AliKalmanTrack (and derived classes) are used to find and fit tracks with the Kalman approach. The AliKalmanTrack defines the interfaces and implements some common functionality. The derived classes know about the clusters assigned to the track. They also update the information in an AliESDtrack. An AliExternalTrackParam object can represent the current status of the track during the track reconstruction. The history of the track during reconstruction can be stored in a list of AliExternalTrackParam objects. The AliKalmanTrack defines the methods:
Double_t GetDCA(...) Returns the distance of closest approach between this track and the track passed as the argument.
Double_t MeanMaterialBudget(...) Calculates the mean material budget and the material properties between two points.
AliTracker and subclasses: The AliTracker is the base class for all the trackers in the different detectors. It defines the interface required to find and propagate tracks. The actual implementation is done in the derived classes.
AliESDtrack combines the information about a track from the different detectors. It contains the current status of the track (an AliExternalTrackParam object) and it has (non-persistent) pointers to the individual AliKalmanTrack objects from each detector that contribute to the track. It also contains detector-specific quantities, like the number or bit pattern of assigned clusters, […], etc., and it can calculate a conditional probability for a given mixture of particle species following the Bayesian approach. It also defines a track label pointing to the corresponding simulated particle in case of Monte Carlo.
The combined track objects are the basis for a physics analysis.
Example
The example below shows reconstruction with a non-uniform magnetic field (the simulation is also done with a non-uniform magnetic field, by adding the following line in the Config.C: field->SetL3ConstField(1)). Only the barrel detectors are reconstructed, a specific TOF reconstruction has been requested, and the RAW data have been used:

void rec() {
  AliReconstruction reco;

  reco.SetRunReconstruction("ITS TPC TRD TOF");
  reco.SetUniformFieldTracking(0);
  reco.SetInput("raw.root");

  reco.Run();
}

Event summary data
The classes that are needed to process and analyze the ESD are packaged in a standalone library (…) which can be used independently of the AliRoot framework. Inside each ESD object, the data are stored in polymorphic containers filled with reconstructed tracks, neutral particles, etc.
The main class is AliESD, which contains all the information needed during the physics analysis:
fields to identify the event, such as event number, run number, time stamp, type of event, trigger type (mask), trigger cluster (mask), version of reconstruction, etc.;
reconstructed ZDC energies and number of participants;
primary-vertex information: the vertex z-position estimated by the T0, the primary vertex estimated by the SPD, and the primary vertex estimated using ESD tracks;
tracklet multiplicity;
interaction time estimated by the T0, together with additional time and amplitude information from the T0;
array of ESD tracks;
arrays of HLT tracks, both from the conformal mapping and from the Hough transform reconstruction;
array of MUON tracks;
array of PMD tracks;
array of TRD ESD tracks (triggered);
arrays of reconstructed V0 vertexes, cascade decays and kinks;
array of calorimeter clusters for PHOS/EMCAL;
indexes of the information from the PHOS and EMCAL detectors in the array above.
Analysis
Introduction
The analysis of experimental data is the final stage of event processing, and it is usually repeated many times. Analysis is a very diverse activity, where the goals of each particular analysis pass may differ significantly.
The ALICE detector [178] is optimized for the reconstruction and analysis of heavy-ion collisions. In addition, ALICE has a broad physics programme devoted to p–p and p–A interactions.
The Physics Board coordinates data analysis via the Physics Working Groups (PWGs).
At present, the following PWGs have started their activity:
PWG0 first physics;
PWG1 detector performance;
PWG2 global event characteristics: particle multiplicity, centrality, energy density, nuclear stopping; soft physics: chemical composition (particle and resonance production, particle ratios and spectra, strangeness enhancement), reaction dynamics (transverse and elliptic flow, HBT correlations, event-by-event dynamical fluctuations);
PWG3 heavy flavors: quarkonia, open charm and beauty production;
PWG4 hard probes: jets, direct photons.
Each PWG has a corresponding module in AliRoot (PWG0 – PWG4). CVS administrators manage the code.
The p–p and p–A programme will provide, on the one hand, reference points for comparison with heavy ions. On the other hand, ALICE will also pursue genuine and detailed p–p studies. Some quantities, in particular the global characteristics of interactions, will be measured during the first days of running, exploiting the low-momentum measurement and particle-identification capabilities of ALICE.
The ALICE computing framework is described in detail in the Computing Technical Design Report [178]. This section is based on Chapter 6 of that document.
The analysis activity
We distinguish two main types of analysis: scheduled analysis and chaotic analysis. They differ in their data access pattern, in the storage and registration of the results, and in the frequency of changes in the analysis code (more details are available below).
In the ALICE computing model, the analysis starts from the Event Summary Data (ESD). These are produced during the reconstruction step and contain all the information needed for the analysis. The size of the ESD is about one order of magnitude smaller than that of the corresponding raw data. The analysis tasks produce Analysis Object Data (AOD), specific to a given set of physics objectives.
Further passes for the specific analysis activity can be performed on the AODs, until the selection parameters or the algorithms are changed.
A typical data analysis task usually requires processing of selected sets of events. The selection is based on the event topology and characteristics, and is done by querying the tag database. The tags represent physics quantities which characterize each run and event, and permit fast selection. They are created after the reconstruction and also contain the unique identifier of the ESD file. A typical query, when translated into natural language, could look like "Give me all the events with impact parameter in <range> containing jet candidates with energy larger than <threshold>". This results in a list of events and file identifiers to be used in the subsequent event loop.
The next step of a typical analysis consists of a loop over all the events in the list and the calculation of the physics quantities of interest. Usually, for each event, there is a set of embedded loops over the reconstructed entities such as tracks, V0 candidates, neutral clusters, etc., the main goal of which is to select the signal candidates. Inside each loop, a number of criteria (cuts) are applied to reject the background combinations and to select the signal ones. The cuts can be based on geometrical quantities, such as the impact parameters of the tracks with respect to the primary vertex, the distance between the cluster and the closest track, the distance of closest approach between the tracks, or the angle between the momentum vector of the particle combination and the line connecting the production and decay vertexes. They can also be based on kinematic quantities, such as momentum ratios, minimal and maximal transverse momentum, or angles in the rest frame of the particle combination. Particle-identification criteria are also among the most common selection criteria.
The optimization of the selection criteria is one of the most important parts of the analysis.
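The cut-based candidate selection described above can be sketched schematically (a toy example: the Candidate struct, the function name and the cut values are all invented for illustration; real analyses apply many more criteria):

```cpp
#include <vector>
#include <cstddef>

// Toy candidate: impact parameter of the track with respect to the primary
// vertex (cm) and transverse momentum (GeV/c).
struct Candidate {
    double impactParameter;
    double pt;
};

// Apply two simple geometrical/kinematic cuts and count the survivors;
// in a real analysis this loop would fill histograms of the quantities
// of interest instead of just counting.
int SelectCandidates(const std::vector<Candidate>& cands,
                     double maxImpact, double minPt)
{
    int nSelected = 0;
    for (std::size_t i = 0; i < cands.size(); ++i)
        if (cands[i].impactParameter < maxImpact && cands[i].pt > minPt)
            ++nSelected;
    return nSelected;
}
```

Tightening or loosening maxImpact and minPt is exactly the kind of optimization discussed in the following paragraph: each choice trades signal efficiency against background rejection.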
The goal is to maximize the signal-to-background ratio in the case of search tasks, or another ratio (typically the significance, e.g. S/√(S+B)) in the case of the measurement of a given property. Usually, this optimization is performed using simulated events, where the information from the particle generator is available.
After the optimization of the selection criteria, one has to take into account the combined acceptance of the detector. This is a complex, analysis-specific quantity which depends on the geometrical acceptance, the trigger efficiency, the decays of particles, the reconstruction efficiency, and the efficiency of the particle identification and of the selection cuts. The components of the combined acceptance are usually parameterized, and their product is used to unfold the experimental distributions or during the simulation of model parameters.
The last part of the analysis usually involves quite complex mathematical treatment and sophisticated statistical tools. At this point, one may include the corrections for systematic effects, the estimation of the statistical and systematic errors, etc.
Scheduled analysis
The scheduled analysis typically uses all the available data from a given period, and stores and registers the results using Grid middleware. The tag database is updated accordingly. The AOD files generated during the scheduled analysis can be used by several subsequent analyses, or by a class of related physics tasks. The procedure of scheduled analysis is centralized and can be understood as data filtering. The requirements come from the PWGs and are prioritized by the Physics Board, taking into account the available computing and storage resources. The analysis code will be tested in advance and released before the beginning of the data processing.
Each PWG will require some sets of AOD per event, which are specific to one or several analysis tasks. The creation of those AOD sets is managed centrally.
The event list of each AOD set will be registered, and access to the AOD files will be granted to all ALICE collaborators. AOD files will be generated at different computing centres and will be stored on the corresponding storage elements. The processing of each file set will thus be done in a distributed way on the Grid. Some of the AOD sets may be so small that they would fit on a single storage element or even on one computer; in this case, the corresponding tools for file replication, available in the ALICE Grid infrastructure, will be used.

Chaotic analysis

The chaotic analysis is focused on a single physics task and is typically based on the filtered data from the scheduled analysis. Each physicist may also access directly large parts of the ESD in order to search for rare events or processes. Usually the user develops the code using a small subsample of data, and changes the algorithms and criteria frequently. The analysis macros and software are tested many times on relatively small data volumes, both experimental and Monte Carlo. In many cases, the output is only a set of histograms. Such a tuning of the analysis code can be done on a local data set or on distributed data using Grid tools. The final version of the analysis will eventually be submitted to the Grid and will access large portions or even the totality of the ESDs. The results may be registered in the Grid file catalogue and used at later stages of the analysis. This activity may or may not be coordinated inside the PWGs, via the definition of priorities. The chaotic analysis is carried out within the computing resources of the physics groups.

Infrastructure tools for distributed analysis

gShell

The main infrastructure tools for distributed analysis have been described in Chapter 3 of the Computing TDR [178]. The actual middleware is hidden by an interface to the Grid, gShell, which provides a single working shell.
The gShell package contains all the commands a user may need for file catalogue queries, creation of sub-directories in the user space, registration and removal of files, job submission and process monitoring. The actual Grid middleware is completely transparent to the user.

The gShell overcomes the scalability problem of direct client connections to databases. All clients connect to the gLite API services. This service is implemented as a pool of pre-forked server daemons, which serve single-client requests. The client-server protocol implements a client state, which is represented by a current working directory, a client session ID and a time-dependent symmetric cipher on both ends to guarantee privacy and security. The daemons execute client calls with the identity of the connected client.

PROOF – the Parallel ROOT Facility

The Parallel ROOT Facility (PROOF) has been specially designed and developed to allow the analysis and mining of very large data sets, minimizing the response time. It makes use of the inherent parallelism in event data and implements an architecture that optimizes I/O and CPU utilization in heterogeneous clusters with distributed storage. The system provides transparent and interactive access to terabyte-scale data sets. Being part of the ROOT framework, PROOF inherits the benefits of a high-performance object storage system and a wealth of statistical and visualization tools. The most important design features of PROOF are:

transparency – no difference between a local ROOT and a remote parallel PROOF session;
scalability – no implicit limitations on the number of computers used in parallel;
adaptability – the system is able to adapt to variations in the remote environment.

PROOF is based on a multi-tier architecture: the ROOT client session, the PROOF master server, optionally a number of PROOF sub-master servers, and the PROOF worker servers.
The user connects from the ROOT session to a master server on a remote cluster, and the master server creates sub-masters and worker servers on all the nodes in the cluster. All workers process queries in parallel, and the results are presented to the user as coming from a single server.

PROOF can be run either in a purely interactive way, with the user remaining connected to the master and worker servers and the analysis results being returned to the user's ROOT session for further analysis, or in an “interactive batch” way, where the user disconnects from the master and workers (see Figure 8). By reconnecting later to the master server the user can retrieve the analysis results for that particular query. The latter mode is useful for relatively long-running queries (several hours) or for submitting many queries at the same time. Both modes will be important for the analysis of ALICE data.

Figure 8: Setup and interaction with the Grid middleware of a user PROOF session distributed over many computing centres.

Analysis tools

This section is devoted to the existing analysis tools in ROOT and AliRoot. As discussed in the introduction, some very broad analysis tasks include the search for rare events (in this case, the physicist tries to maximize the signal-to-background ratio), or measurements where it is important to maximize the signal significance. The tools that provide the possibility to apply certain selection criteria, and to find the interesting combinations within a given event, are described below. Some of them are very general and used in many different places, for example the statistical tools. Others are specific to a given analysis or to a certain analysis mode.

Statistical tools

Several commonly used statistical tools are available in ROOT [178]. ROOT provides classes for efficient data storage and access, such as trees (TTree) and ntuples (TNtuple).
The ESD information is organized in a tree, where each event is a separate entry. This allows a chain of ESD files to be made and the elaborate selector mechanisms to be used in order to exploit the PROOF services. The tree classes permit easy navigation, selection, browsing, and visualization of the data in the branches.

ROOT also provides histogramming and fitting classes, which are used for the representation of all the one- and multi-dimensional distributions, and for the extraction of their fitted parameters. ROOT provides an interface to powerful and robust minimization packages, which can be used directly during special parts of the analysis. A special fitting class allows one to decompose an experimental histogram as a superposition of source histograms.

ROOT also provides a set of sophisticated statistical analysis tools such as principal component analysis, robust estimators and neural networks. The calculation of confidence levels is provided as well.

Additional statistical functions are included in TMath.

Calculations of kinematics variables

The main ROOT physics classes include 3-vectors, Lorentz vectors and operations such as translation, rotation and boost. The calculations of kinematics variables such as transverse and longitudinal momentum, rapidity, pseudo-rapidity, effective mass, and many others are provided as well.

Geometrical calculations

There are several classes which can be used for the measurement of the primary vertex: AliITSVertexerZ, AliITSVertexerIons, AliITSVertexerTracks, etc. A fast estimation of the z-position can be done by AliITSVertexerZ, which works for both lead–lead and proton–proton collisions. A universal tool is provided by AliITSVertexerTracks, which calculates the position and covariance matrix of the primary vertex based on a set of tracks, and estimates the χ² contribution of each track as well. An iterative procedure can be used to remove the secondary tracks and improve the precision.
\rTrack propagation towards the primary vertex (inward) is provided by AliESDtrack.\rThe secondary vertex reconstruction in case of V0 is provided by AliV0vertexer, and, in case of cascade hyperons, by AliCascadeVertexer. AliITSVertexerTracks can be used to find secondary vertexes close to the primary one, for example decays of open charm like \13 EMBED Equation.DSMT4 \14\ 1\15 or \13 EMBED Equation.DSMT4 \14\ 1\15. Calculation of impact parameters with respect to the primary vertex is done during reconstruction, and this information is available in AliESDtrack. It is then possible to recalculate the impact parameter during ESD analysis, after an improved determination of the primary vertex position using reconstructed ESD tracks. \rGlobal event characteristics\rThe impact parameter of the interaction and the number of participants are estimated from the energy measurements in the ZDC. In addition, the information from the FMD, PMD, and T0 detectors is available. It gives a valuable estimate of the event multiplicity at high rapidities and permits global event characterization. Together with the ZDC information, it improves the determination of the impact parameter, the number of participants, and the number of binary collisions.\rThe event plane orientation is calculated by the AliFlowAnalysis class.\rComparison between reconstructed and simulated parameters\rThe comparison between the reconstructed and simulated parameters is an important part of the analysis. It is the only way to estimate the precision of the reconstruction. Several example macros exist in AliRoot and can be used for this purpose: AliTPCComparison.C, AliITSComparisonV2.C, etc. As a first step in each of these macros, the list of so-called “good tracks” is built. The definition of a good track is explained in detail in the ITS [\ 2] and TPC [\ 2] Technical Design Reports. The essential point is that the track goes through the detector and can be reconstructed. 
Using the “good tracks”, one then estimates the efficiency of the reconstruction and the resolution.

Another example is specific to the MUON arm: the MUONRecoCheck.C macro compares the reconstructed muon tracks with the simulated ones.

There is also the possibility to calculate the resolutions directly, without additional requirements on the initial track. One can use the so-called track label and retrieve the corresponding simulated particle directly from the particle stack (AliStack).

Event mixing

One particular analysis approach in heavy-ion physics is the estimation of the combinatorial background using event mixing. Part of the information (for example the positive tracks) is taken from one event, and another part (for example the negative tracks) is taken from a different, but “similar”, event. The event “similarity” is very important, because only in this case do the combinations produced from different events represent the combinatorial background. By “similar” in the example above we mean events with the same multiplicity of negative tracks. In addition, one may require similar impact parameters of the interactions, rotation of the tracks of the second event to adjust the event plane, etc. The possibility of event mixing is provided in AliRoot by the fact that the ESD is stored in trees, so one can chain and access many ESD objects simultaneously. The first pass would then be to order the events according to the desired criterion of “similarity” and to use the obtained index for accessing the “similar” events in the embedded analysis loops. An example of event mixing is shown in Figure 9. The background distribution has been obtained using “mixed events”. The signal distribution has been taken directly from the Monte Carlo simulation.
The “experimental distribution” has been produced by the analysis macro and decomposed as a superposition of the signal and background histograms.

Figure 9: Mass spectrum of meson candidates produced inclusively in proton–proton interactions.

Analysis of the High-Level Trigger (HLT) data

This is a specific analysis that is needed either in order to adjust the cuts in the HLT code, or to estimate the HLT efficiency and resolution. AliRoot provides a transparent way of doing such an analysis, since the HLT information is stored in the form of ESD objects in a parallel tree. This also helps in the monitoring and visualization of the results of the HLT algorithms.

EVE – Event Visualization Environment

EVE is composed of:

a small application kernel;
graphics classes with editors and OpenGL renderers;
CINT scripts that extract data, fill graphics classes and register them to the application.

Because the framework is still evolving at this point, some things might not work as expected. The usage is the following:

Initialize the ALICE environment.

Run the “alieve” executable, which should be in the path, and run the alieve_init.C macro. For example:

To load the first event from the current directory:

  # alieve alieve_init.C

To load the 5th event from the directory /data/my-pp-run:

  # alieve 'alieve_init.C("/data/my-pp-run", 5)'

Interactively:

  # alieve
  root[0] .L alieve_init.C
  root[1] alieve_init("/somedir")

Use the GUI or the CINT command line to invoke further visualization macros.

To navigate through the events use the macros event_next.C and event_prev.C. These are equivalent to the command-line invocations:

  root[x] Alieve::gEvent->NextEvent()

or

  root[x] Alieve::gEvent->PrevEvent()

The general form to access an event via its number is:

  root[x] Alieve::gEvent->GotoEvent(<event-number>)

See the files in EVE/alice-macros.
For specific use cases, these should be edited to suit your needs.

Directory structure

EVE is split into two modules: TEve (ROOT part, not dependent on AliRoot) and ALIEVE (ALICE-specific part).

ALIEVE/        sources
macros/        macros for bootstrapping and internal steering
alice-macros/  macros for ALICE visualization
alice-data/    data files used by the ALICE macros
test-macros/   macros for tests of specific features; usually one needs to copy and edit them
bin/, Makefile used for a stand-alone build of the package

Note that a failed macro execution can leave CINT in a poorly defined state that prevents further execution of macros. For example:

  Exception Reve::Exc_t: Event::Open failed opening ALICE ESDfriend from
  '/alice-data/coctail_10k/AliESDfriends.root'.

  root [1] Error: Function MUON_geom() is not defined in current scope :0:
  *** Interpreter error recovered ***
  Error: G__unloadfile() File "/tmp/MUON_geom.C" not loaded :0:

“gROOT->Reset()” helps in most of the cases.

Data input, output and exchange subsystem of AliRoot

An earlier version of this section has been published separately.

A few tens of different data types are present within AliRoot, because hits, summable digits, digits and clusters are characteristic for each sub-detector. Writing all of the event data to a single file causes a number of limitations. Moreover, the reconstruction chain introduces rather complicated dependencies between different components of the framework, which is highly undesirable from the point of view of software design. In order to solve both problems, we have designed a set of classes that manage data manipulation, i.e. storage, retrieval and exchange within the framework.

It was decided to use the “white board” concept: a single exchange object where all data are stored and made publicly accessible. For that purpose, we have employed the TFolder facility of ROOT.
This solution solves the problem of inter-module dependencies.

There are two frequently occurring use-cases regarding the data flow within the framework:

data production: produce – write – unload (clean);
data processing: load (retrieve) – process – unload.

Loaders are utility classes that encapsulate and automate these tasks. They reduce the user's interaction with the I/O routines to the necessary minimum, providing a friendly and manageable interface which, for the above use-cases, consists of only three methods:

Load – retrieves the requested data to the appropriate place in the white board (folder);
Unload – cleans the data;
Write – writes the data.

Such an insulation layer has a number of advantages:

it facilitates data access;
it avoids code duplication in the framework;
it minimizes the risk of bugs in the I/O management. The ROOT object-oriented data storage greatly simplifies the user interface; however, there are a few pitfalls that are not well known to inexperienced users.

To begin with, we need to briefly introduce the basic concepts and the way AliRoot operates. The basic entity is the event, i.e. all data recorded by the detector in a certain time interval plus all the information reconstructed from these data. Ideally, the data are produced by a single collision selected by a trigger for recording. However, it may happen that data from preceding or following events are present, because the bunch-crossing rate is higher than the maximum detector frequency (pile-up), or simply because more than one collision occurred within one bunch crossing.

Information describing the event and the detector state is also stored, like the bunch crossing number, magnetic field, configuration, alignment, etc. In the case of Monte Carlo simulated data, information concerning the generator simulation parameters is also kept.
Altogether, this data is called the “header”.

In the case of collisions that produce only a few tracks (the best example are pp collisions), it may happen that the total overhead (the size of the header and of the ROOT structures supporting object-oriented data storage) is not negligible compared to the data itself. To avoid such situations, the possibility of storing an arbitrary number of events together within a run is required. Hence, the common data can be written only once per run, and several events can be written to a single file.

It was decided that data related to different detectors and different processing phases should be stored in different files. In such a case, only the required data need to be downloaded for an analysis. It also allows for practical updating of the files if required, for example when a new version of reconstruction or simulation has to be run for a given detector. Hence, only the modified files are updated and all the rest remain untouched. This is especially important because it is difficult to erase files in mass storage systems. It also provides for easy comparison with data produced by competing algorithms.

Header data, configuration and management objects are stored in a separate file, which is usually named galice.root (for simplicity we will further refer to it as galice).

The “White Board”

The folder structure is shown in Figure 10. It is divided into two parts:

event data that have the scope of a single event;
static data that do not change from event to event, i.e. geometry and alignment, calibration, etc.

During start-up of AliRoot the skeleton structure of the ALICE white board is created. The AliConfig class (singleton) provides all the functionality that is needed to construct the folder structures.

Event data are stored under a single subfolder (the event folder), named as specified by the user when opening a session (run).
Many sessions can be opened at the same time, provided that each of them has a unique event folder name, so that they can be distinguished. This functionality is crucial for superimposing events on the level of the summable digits, i.e. the analogue detector response without the noise contribution (event merging). It is also useful when two events, or the same event simulated or reconstructed using different algorithms, need to be compared.

Figure 10: The folder structure. An example event is mounted under the “Event” folder.

Loaders

Loaders can be represented as a four-layer, tree-like structure (see Figure 11). It represents the logical structure of the detector and the data association.

Figure 11: Loaders diagram. Dashed lines separate the layers serviced by the different types of loaders (from top): AliRunLoader, AliLoader, AliDataLoader, AliBaseLoader.

AliBaseLoader – One base loader is responsible for posting (finding in a file and publishing in a folder) and writing (finding in a folder and putting in a file) a single object. AliBaseLoader is a pure virtual class, because writing and posting depend on the type of the object. The following concrete classes are currently implemented:

AliObjectLoader – handles TObject, i.e. basically any object within ROOT and AliRoot, since an object must inherit from this class to be posted to the white board (added to a TFolder).

AliTreeLoader – the base loader for TTrees, which require special handling because they must always be properly associated with a file.

AliTaskLoader – handles TTask, which needs to be posted to the appropriate parent TTask instead of a TFolder.

AliBaseLoader stores the name of the object it manages in its base class TNamed, to be able to find it in a file or folder.
The user normally does not need to use these classes directly; they are rather utility classes employed by AliDataLoader.

AliDataLoader manages a single data type, for example the digits for a detector or the kinematics tree. Since a few objects are normally associated with a given data type (the data itself, quality assurance (QA) data, a task that produces the data, a QA task, etc.), AliDataLoader has an array of AliBaseLoaders, so that each of them is responsible for a single object. Hence, AliDataLoader can be configured to meet the specific requirements of a certain data type.

A single file contains data corresponding to a given processing phase and to one specific detector only. By default, the file is named according to the schema Detector Name + Data Name + .root, but this can be changed at run-time if needed. In this way, the data can be stored in, or retrieved from, an alternative source. When needed, the user can limit the number of events stored in a single file. If the maximum number is exceeded, the current file is closed and a new one is created, with the consecutive number added to the name of the first one (before the .root suffix). Of course, during the reading process, files are also automatically interchanged behind the scenes, invisibly to the user.

The AliDataLoader class performs all the tasks related to file management, e.g. opening, closing, ROOT directory management, etc. Hence, for each data type the average file size can be tuned. This is important because, on the one hand, it is undesirable to store small files on the mass storage systems and, on the other hand, all file systems have a maximum allowed file size.

AliLoader manages all the data associated with a single detector (hits, digits, summable digits, reconstructed points, etc.). It contains an array of AliDataLoaders, and each of them manages a single data type.

The AliLoader object is created by a class representing a detector (inheriting from AliDetector).
Its functionality can be extended and customized to the needs of a particular detector by creating a specialized class that derives from AliLoader. The default configuration can easily be modified either in AliDetector::MakeLoader or by overriding the method AliLoader::InitDefaults.

AliRunLoader is the main handler for data access and manipulation in AliRoot. There is only one such object for each run. It is always named RunLoader and is stored in the top (ROOT) directory of the galice file.

It keeps an array of AliLoaders, one for each detector, and manages the event data that are not associated with any detector, i.e. Kinematics and Header, utilizing instances of AliDataLoader for this purpose.

The user opens a session using the static method AliRunLoader::Open, which takes three parameters: the file name, the event folder name and the mode. If the mode is “new”, a file and a run loader are created from scratch. Otherwise, the file is opened and a run loader is looked up inside the file. If successful, the event folder is created under the name provided (if it does not exist yet), and the structure presented in Figure 11 is created within the folder. The run loader is put into the event folder, so the user can always find it there and use it for further data management.

AliRunLoader provides the simple method GetEvent(n) to loop over the events within a run. A call clears all the currently loaded data and automatically posts the data for the newly requested event.

In order to facilitate the way the user interacts with the loaders, AliRunLoader provides a set of shortcut methods.
For example, if digits are required to be loaded, the user can call AliRunLoader::LoadDigits("ITS TPC") instead of finding the appropriate AliDataLoaders responsible for the digits of ITS and TPC, and then requesting each of them to load the data.

Calibration and alignment

Calibration framework

The calibration framework is based on the following principles:

The calibration and alignment database contains instances of ROOT TObject stored in ROOT files.
Calibration and alignment objects are run-dependent objects.
The database is read-only and provides for automatic versioning of the stored objects.

Three different data storage structures are available:

a GRID folder containing ROOT files, each one containing one single ROOT object. The ROOT files are created inside a directory tree, defined by the object's name and run validity range;
a LOCAL folder containing ROOT files, each one containing one single ROOT object, with a structure similar to the Grid one;
a LOCAL ROOT file containing one or more objects (the so-called “dump”). The objects are stored in ROOT TDirectories defined by the object's name and run range.

Object storage and retrieval techniques are transparent to the user: he/she should only specify the kind of storage to be used (“Grid”, “local”, “dump”). Objects are stored and retrieved using AliCDBStorage::Put and AliCDBStorage::Get. Multiple objects can be retrieved using AliCDBStorage::GetAll.

During object retrieval, it is possible to specify a particular version by means of one or more selection criteria.

The main features of the CDB storage classes are the following:

AliCDBManager is a singleton that handles the instantiation, usage and destruction of all the storage classes. It allows the instantiation of more than one storage type at a time, keeping track of the list of active storages. The instantiation of a storage element is done by means of AliCDBManager::GetStorage.
A storage element is identified by its “URI” (a string) or by its “parameters”. The set of parameters defining each storage is contained in its specific AliCDBParam class (AliCDBGridParam, AliCDBLocalParam, AliCDBDumpParam).

In order to avoid version clashes when objects are transferred from Grid to local storage and vice versa, we have introduced a new versioning schema. Two version numbers define the object: a “Grid” version and a “local” version (sub-version). In local storage, only the local version is increased, while in Grid storage, only the Grid version is increased. When an object is transferred from local to Grid, the Grid version is increased by one; when an object is transferred from Grid to local, the Grid version is kept and the sub-version is reset to zero.

AliCDBEntry is the container class of the object and its metadata. The metadata of the object has been divided into two classes: AliCDBId contains data used to identify the object during storage or retrieval, while AliCDBMetaData holds other metadata which is not used during storage and retrieval.

The AliCDBId object in turn contains:

An object describing the name (path) of the object (AliCDBPath).
The path name must have a fixed, three-level directory structure: “level1/level2/level3”.
An object describing the run validity range of the object (AliCDBRunRange).
The version and sub-version numbers (automatically set during storage).
A string (fLastStorage) specifying from which storage the object was retrieved (“new”, “Grid”, “local”, “dump”).

The AliCDBId object has two functions:

During storage it is used to specify the path and run range of the object.
During retrieval it is used as a “query”: it contains the path of the object, the required run and, if needed, the version and sub-version to be retrieved (if the version and/or sub-version are not specified, the highest ones are looked up).

Here we give some usage examples:

A pointer to the single instance of the AliCDBManager class is obtained by invoking AliCDBManager::Instance().

A storage is activated, and a pointer to it returned, using the AliCDBManager::GetStorage(const char* URI) method. Here are some examples of how to activate a storage via a URI string. The URIs must have a well-defined syntax, for example (local cases):

“local://DBFolder” – local storage with base directory “DBFolder” (created in the working directory if it does not exist);
“local://$ALICE_ROOT/DBFolder” – local storage with base directory “$ALICE_ROOT/DBFolder” (full path name);
“dump://DBFile.root” – dump storage; the file DBFile.root is looked for, or created, in the working directory if the full path is not specified;
“dump://DBFile.root;ReadOnly” – dump storage;
DBFile.root is opened in read-only mode.

Concrete examples (local case):

  AliCDBStorage *sto =
      AliCDBManager::Instance()->GetStorage("local://DBFolder");

  AliCDBStorage *dump =
      AliCDBManager::Instance()->GetStorage("dump:///data/DBFile.root;ReadOnly");

Creation and storage of an object (how an object can be created and stored in a local database): let us suppose our object is an AliZDCCalibData object (a container of arrays of pedestal constants), whose name is “ZDC/Calib/Pedestals” and which is valid for runs 1 to 10.

  AliZDCCalibData *calibda = new AliZDCCalibData();
  // ... filling calib data ...

  // creation of the AliCDBId object (identifier of the object)
  AliCDBId id("ZDC/Calib/Pedestals",1,10);

  // creation and filling of the AliCDBMetaData
  AliCDBMetaData *md = new AliCDBMetaData();
  md->Set... // fill the metadata object, see the list of setters...

  // activation of local storage
  AliCDBStorage *sto =
      AliCDBManager::Instance()->GetStorage("local://$HOME/DBFolder");

  // put the object into the database
  sto->Put(calibda, id, md);

The object is stored in the local file: $HOME/DBFolder/ZDC/Calib/Pedestals/Run1_10_v0_s0.root

Retrieval of an object:

  // activation of local storage
  AliCDBStorage *sto =
      AliCDBManager::Instance()->GetStorage("local://$HOME/DBFolder");

  // get the AliCDBEntry which contains the object "ZDC/Calib/Pedestals",
  // valid for run 5, highest version
  AliCDBEntry* entry = sto->Get("ZDC/Calib/Pedestals",5);
  // alternatively, create an AliCDBId query and use sto->Get(query) ...

  // specifying the version: I want version 2
  AliCDBEntry* entry = sto->Get("ZDC/Calib/Pedestals",5,2);

  // specifying version and sub-version: I want version 2 and sub-version 1
  AliCDBEntry* entry = sto->Get("ZDC/Calib/Pedestals",5,2,1);

Selection criteria can also be specified using AliCDBStorage::AddSelection (see also the methods RemoveSelection, RemoveAllSelections and PrintSelectionList):

  // I want version 2_1 for all "ZDC/Calib/*" objects for runs 1-100
  sto->AddSelection("ZDC/Calib/*",1,100,2,1);
  // and I want version 1_0 for "ZDC/Calib/Pedestals" objects for runs 5-10
  sto->AddSelection("ZDC/Calib/Pedestals",5,10,1,0);

  AliCDBEntry* entry = sto->Get("ZDC/Calib/Pedestals",5);

Retrieval of multiple objects with AliCDBStorage::GetAll:

  TList *list = sto->GetAll("ZDC/*",5);

Use of default and drain storages:

AliCDBManager allows one to set pointers to a “default storage” and to a “drain storage”. In particular, if the drain storage is set, all the retrieved objects are automatically stored in it. The default storage is automatically set as the first active storage.

Examples of how to use the default and drain storages:

  AliCDBManager::Instance()->SetDefaultStorage("local://$HOME/DBFolder");
  AliCDBManager::Instance()->SetDrain("dump://$HOME/DBDrain.root");
  AliCDBEntry *entry =
      AliCDBManager::Instance()->GetDefaultStorage()->Get("ZDC/Calib/Pedestals",5);
  // the retrieved entry is automatically stored into DBDrain.root!

To destroy the AliCDBManager instance and all the active storages:

  AliCDBManager::Instance()->Destroy();

To create a local copy of all the alignment objects:

  AliCDBManager* man = AliCDBManager::Instance();
  man->SetDefaultStorage(
      "alien://folder=/alice/simulation/2006/PDC06/Residual/CDB/");

  man->SetDrain("local://$ALICE_ROOT/CDB");
  AliCDBStorage* sto = man->GetDefaultStorage();
  sto->GetAll("*",0);

  // all the objects are stored in $ALICE_ROOT/CDB!

The Event Tag System

The event tag system is designed to provide fast pre-selection of events with the desired characteristics. This task will be performed, first of all, by imposing event selection criteria within the analysis code, and then by interacting with software that is designed to provide file-transparent event access for analysis. The latter is an evolution of the procedure that has already been implemented by the STAR collaboration.
\rIn the next sections, we will first describe the analysis scheme using the event tag system. Then, we will continue by presenting in detail the existing event tag prototype. Furthermore, a separate section is dedicated to the description of the two ways to create the tag files and their integration in the whole framework [\13 PAGEREF _RefE2 \h \ 1\14178\15].\rThe Analysis Scheme\rALICE collaboration intends to use a system that will reduce time and computing resources needed to perform an analysis by providing to the analysis code just the events of interest as they are defined by the users' selection criteria. \13 REF _Ref31545596 \h \ 1\14Figure 12\15 gives a schematic view of the whole analysis architecture.\r\ 1\rFigure \13 SEQ "Figure" \*Arabic \1412\15: The selected analysis scheme using the event tag system\rBefore describing the architecture, let us first define a few terms that are listed in this figure:\rUser/Administrator: A typical ALICE user, or even the administrator of the system, who wants to create tag files for all or a few ESDs [\13 PAGEREF _RefE2 \h \ 1\14178\15] of a run.\rIndex Builder: A code with Grid Collector [\ 2, \ 2] functionality that allows the creation of compressed bitmap indices from the attributes listed in the tag files. This functionality will provide an even faster pre-selection.\rSelector: The user's analysis code that derives from the TSelector class of ROOT [\ 2].\rThe whole procedure can be summarized as follows: The offline framework will create the tag files, which will hold information about each ESD file (top left box of \13 REF _Ref31545596 \h \ 1\14Figure 12\15) as a final step of the whole reconstruction chain. The creation of the tag files is also foreseen to be performed by each user in a post-process that will be described in the following sections. These tag files are ROOT files containing trees of tag objects. 
Then, following the procedure flow as shown in \13 REF _Ref31545596 \h \ 1\14Figure 12\15, the indexing algorithm of the Grid Collector, the so-called Index Builder, will take the produced tag files and create the compressed bitmap indices. In parallel, the user will submit a job with some selection criteria relevant to the corresponding analysis he/she is performing. These selection criteria will be used in order to query the produced compressed indices (or as it is done at the moment, the query will be on the tags themselves\ 5) and the output of the whole procedure will be a list of TEventList objects grouped by GUID, which is the file's unique identifier in the file catalogue, as it is shown in the middle box of \13 REF _Ref31545596 \h \ 1\14Figure 12\15. This output will be forwarded to the servers that will interact with the file catalogue in order to retrieve the physical file for each GUID (left part of \13 REF _Ref31545596 \h \ 1\14Figure 12\15). The final result will be passed to a selector [\13 PAGEREF _RefE53 \h \ 1\14179\15] that will process the list of the events that fulfil the imposed selection criteria and merge the output into a single object, whether this is a histogram, a tree or any ROOT object. \rThe whole implementation is based on the existence of an event tag system that will allow the user to create the tags for each file. This event tag system has been used inside the AliRoot framework [\ 2] since June 2005. In the next section we will describe this system in detail. \rThe Event Tag System\rThe event tag system has been built with the motivation to provide a summary of the most useful physics information that describe each ESD to the user. 
It consists of four levels of information [\ 2] as explained in \13 REF _Ref31548293 \h \ 1\14Figure 13\15: \rRun Level: Fields that describe the run conditions and configurations which are retrieved from the Detector Control System (DCS), the Data Acquisition system (DAQ) and the offline framework.\rLHC Level: Fields that describe the LHC condition per ALICE run which are retrieved from the DCS.\rDetector Level: Fields that describe the detector configuration per ALICE run which are retrieved from the Experiment Control system (ECS).\rEvent Level: Fields that describe each event - mainly physics related information, retrieved from offline and the Grid file catalogue.\r\ 1\rFigure \13 SEQ "Figure" \*Arabic \1413\15: Sources of information for different levels of the event tag system\rThe corresponding classes that form this system have already been included in AliRoot's framework under the STEER module. The output tag files will be ROOT files having a tree structure [\13 PAGEREF _RefE55 \h \ 1\14179\15].\rRun tags: The class that deals with the run tag fields is called AliRunTag. One AliRunTag object is associated with each file.\rLHC tags: The class that deals with the LHC tag fields is called AliLHCTag. One AliLHCTag object is associated with each file.\rDetector tags: The class that deals with the detector tag fields is called AliDetectorTag. Information concerning the detector configuration per ALICE run will be described in the ECS database. One AliDetectorTag object is associated with each file.\rEvent tags: The class that handles the event tag fields is called AliEventTag. The values of these fields, as mentioned before, will be mainly retrieved from the ESDs, although there are some fields that will come from the Grid file catalogue. 
The number of AliEventTag objects that are associated to each file is equal to the number of events that are stored inside the initial ESD file.\rThe Creation of the Tag-Files\rThe creation of the tag-files will be the first step of the whole procedure. Two different scenarios were considered:\rOn-the-fly-creation: The creation of the tag file comes as a last step of the reconstruction procedure.\rPost-creation: After the ESDs have been transferred to the ALICE file catalogue [\13 PAGEREF _RefE9 \h \ 1\14178\15], every user has the possibility to run the creation of tag-files as post-process and create his/her own tag-files.\rThe on the fly creation scenario\rAs mentioned before, the on-the-fly creation of tag-files is implemented in such a way that the tags are filled as last step of the reconstruction chain. This process is managed inside the AliReconstruction class. Thus, exactly after the creation of the ESD, the file is passed as an argument to AliReconstruction::CreateTags. Inside this method, empty AliRunTag and AliEventTag objects are created. The next step is to loop over the events listed in the ESD file, and finally fill the run and event level information. The naming convention for the output tag file is: RunRunId.EventFirstEventId_LastEventId.ESD.tag.root [\13 PAGEREF _RefE55 \h \ 1\14179\15]. \rThe post-creation scenario\rThe post-creation procedure provides the possibility to create and store the tag-files at any time [\13 PAGEREF _RefE55 \h \ 1\14179\15]. The post-creation of the tag-files implies the following steps:\rThe reconstruction code finishes and several ESD files are created. \rThese files are then stored in the ALICE file catalogue [\13 PAGEREF _RefF0 \h \ 1\14Error: Reference source not found\15]. 
\rThen, the administrator, or any user in the course of her/his private analysis, can loop over the produced ESDs and create the corresponding tag-files.\rThese files can either be stored locally or in the file catalogue [\13 PAGEREF _RefF0 \h \ 1\14Error: Reference source not found\15].\rAs a final step, a user can choose to create a single merged tag file from all the previous ones.\rWhat a user has to do in order to create the tag-files using this procedure, depends on the location of the input AliESDs.root files. Detailed instructions on how to create tag-files for each separate case will be given in the following sections. In general, a user has to perform the following steps:\rProvide information about the location of the AliESDs.root files:\rResult of a query to the file catalogue (TGridResult [\ 2])\rGrid stored ESDs\rAn upper level local directory\rLocally stored ESDs or even a text file\rCERN Analysis Facility (CAF) stored ESDs [\ 2].\rLoop over the entries of the given input (TGridResult, local path, text file) and create the tag file for each entry.\rEither store the files locally or in the Grid's file catalogue\rMerge the tag-files into one file and store it accordingly (locally or in the file catalogue) [\ 2].\r\13 REF _Ref31562413 \h \ 1\14Figure 14\15 provides a schematic view of these functionalities.\r\ 1\rFigure \13 SEQ "Figure" \*Arabic \1414\15: A schematic view of the architecture of the post-creation of tag-files\rThe class that addresses this procedure is AliTagCreator. The main methods of the class and their corresponding functionalities are described in the following lines:\rSetStorage allows the user to define the place where the tag-files will be stored.\rSetSE allows to define the desired storage element on the Grid.\rSetGridPath allows the user to define the Grid path under which the files will be stored. 
Per default, the tag-files will be stored in the home directory of the user in the file catalogue.
ReadGridCollection is used when creating tag-files from ESDs that are stored in the file catalogue. It loops over the corresponding entries and calls AliTagCreator::CreateTags in order to create the tag-files that will be stored accordingly.
ReadCAFCollection is used when creating tag-files from ESDs that are stored in the CERN Analysis Facility (CAF) [179].
ReadLocalCollection is used when creating tag-files from ESDs that are stored locally.
MergeTags chains all the tags, stored locally or in the Grid, and merges them by creating a single tag file named RunRunId.Merge.ESD.tag.root. This file is then stored either locally or in the Grid according to the value set in the SetStorage method.
Usage of AliRoot classes
The following lines give an example of how to use the AliTagCreator class in order to create tags. Additional information can be found in [179]. There are three different cases depending on the location where the AliESDs.root files are stored:
Locally stored AliESDs.root files;
CAF stored AliESDs.root files;
Grid stored AliESDs.root files.
We will address the three different cases separately.
Locally stored AliESDs.root
We assume that for debugging or source code validation reasons, a user stored a few AliESDs.root files locally under $HOME/PDC06/pp. One level down, the directory structure can be of the form:
xxx/AliESDs.root
yyy/AliESDs.root
zzz/AliESDs.root
where xxx is the directory, which can sensibly be named Run1, Run2 etc., or even simply be the run number. In order to create the tag-files, we need to create an empty AliTagCreator object. The next step is to define whether the produced tags will be stored locally or on the Grid. 
If the second option is chosen, the user must define the SE and the corresponding Grid path where the tag-files will be stored. If the first option is chosen, the files will be stored locally in the working directory. Finally, the call of AliTagCreator::ReadLocalCollection allows the user to query the local file system and create the tag-files.
//create an AliTagCreator object
 AliTagCreator *t = new AliTagCreator(); 
 //Store the tag-files locally
 t->SetStorage(0);
 //Query the file system, create the tags and store them
 t->ReadLocalCollection("/home/<username>/PDC06/pp");
 //Merge the tags and store the merged file
 t->MergeTags();
AliESDs.root on the CAF
When ESD files are stored on the CAF, we have to provide a text file that contains information about the location of the files in the storage element of the system [179, 179]. We now assume that this input file is called ESD.txt and is located in the working directory, and indicate the steps that one has to follow:
//create an AliTagCreator object
 AliTagCreator *t = new AliTagCreator(); 
 //Store the tag-files locally
 t->SetStorage(0);
 //Read the entries of the file, create the tags and store them
 t->ReadCAFCollection("ESD.txt");
 //Merge the tags and store the merged file
 t->MergeTags();
AliESDs.root on the GRID
When ESD files are stored in the file catalogue, the first thing a user needs is a ROOT version compiled with AliEn support. Detailed information on how to do this can be found in [179]. Then, we need to invoke the AliEn API services [179] and pass a query to the file catalogue (TGridResult). 
The following lines give an example of the whole procedure:\r//connect to AliEn's API services\r TGrid::Connect("alien://","<username>"); \r //create an AliTagCreator object\r AliTagCreator *t = new AliTagCreator(); \r //Query the file catalogue and get a TGridResult\r TGridResult* result = \r gGrid->Query("/alice/*",\r "AliESDs.root","","");\r //Store the tag-files in AliEn's file catalog\r t->SetStorage(1);\r //Define the SE where the tag-files will be stored\r t->SetSE("ALICE::CERN::se01");\r //Define the Grid's path where the tag-files will be stored\r t->SetGridPath("PDC06/Tags");\r //Read the TGridResult, create the tags and store them\r t->ReadGridCollection(result);\r //Merge the tags and store the merged file\r t->MergeTags();\rRun and File Metadata for the ALICE File Catalogue\rIntroduction\rIn order to characterize physics data it is useful to assign metadata to different levels of data abstraction. For data produced and used in the ALICE experiment a three layered structure will be implemented:\rRun-level metadata,\rFile-level metadata, and\rEvent-level metadata.\rThis approach minimizes information duplication and takes into account the actual structure of the data in the ALICE File Catalogue.\rSince the event-level metadata is fully covered by the so called 'event tags' (stored in the ESDtags.root files), it will not be discussed in this document. There is a mechanism in place to efficiently select physics data based on run- and file-level conditions and then make a sub-selection on the event level, utilizing the event level metadata. This pre-selection on run- and file-level information is not necessary, but can speed up the analysis process.\rThis document is organized as follows: First we will discuss the path and file name specifications. The list of files/file types to be incorporated in the file catalogue will be discussed. 
Finally, we will list the meta tags to be filled for a given run (or file).\rPath name specification\rThe run and file metadata will be stored in the ALICE File Catalogue. A directory structure within this database will ensure minimization of metadata duplication. For example, all files written during one specific physics-run do not need to be tagged one by one with a 'run' condition, since all files belonging to this physics run will be tagged with the same information at the run directory level. It is advantageous to organize all these files in a directory tree, which avoids additional tagging of all files. Since this tree structure will be different to that of CASTOR, a mechanism is provided to make sure that the files once created by DAQ are later ‘tagged’ with the proper information encoded in the directory tree itself.\rThe CASTOR tree structure will look like \r/castor/‹Year›/‹Month›/‹Day›/‹Hour›/.\rThe CASTOR file names will be of fixed width, containing information about the year (two digits: YY), the run-number (9 digits, zero padded: RRRRRRRRR), the host-identifier of the GDC (three digits, zero padded: NNN), and a sequential file-count (S), ending up in names of the following structure:\rYYRRRRRRRRRNNN.S.raw or YYRRRRRRRRRNNN.S.root.\rThis is different to what we discussed before where the CASTOR system and the file catalogue had the same directory structure, and getting the ‘hidden’ information from the CASTOR path name was easy.\rThe path name(s) where all ALICE files will be registered in the ALICE File Catalogue will have the following structure:\r/data/‹Year›/‹AcceleratorPeriod›/‹RunNumber›/ for real data,\r/sim/‹Year›/‹ProductionType›/‹RunNumber›/ for simulated data,\rwhere ‹Year›, ‹AcceleratorPeriod›, and ‹RunNumber› contain values like 2007, LHC7a, and 123456789 (nine digits, zero padded). ‹ProductionType› gives information about the specific simulation which was run, which includes the ‘level of conditions’ applied: Ideal, Full, Residual. 
Therefore one possible name for ‹ProductionType› would be PDC06_Residual. The subdirectory structure provides the place for different files from the specific runs and will be called
raw/ for raw data,
reco/‹PassX›/cond/ for links to calibration and condition files,
reco/‹PassX›/ESD/ for ESD and corresponding ESD tag-files,
reco/‹PassX›/AOD/ for AOD files.
The list of these subdirectories might be extended at a later stage if necessary.
‹PassX› will specify a certain reconstruction pass (encoded in a production tag) of the raw data in the same parent directory. For each raw data file the output files of several production passes might be present in the reco/ directory, to be distinguished from each other by a different production tag ‹PassX›.
The cond/ directory will contain a data set (in the form of an xml-file) which links back to the actual condition database (CDB) files. The CDB files themselves will be stored in a three-layered directory tree:
‹Detector›/‹Type›/‹Subtype›,
where ‹Detector› will be something like TPC, TRD, PHOS, …, and ‹Type› will specify the type of condition data, like Calib or Align. ‹Subtype› can assume generic strings like Data, Pedestals, or Map. These conditions may be stable on a longer time scale than a single run. In order to avoid duplication or replication of these files (which can be quite large in the case of certain mapping files) for each run, they will be put a few levels higher, namely in the
/data/‹Year›/‹AcceleratorPeriod›/CDB/
directory. 
The name of the actual calibration files (stored in the subdirectories ‹Detector›/‹Type›/‹Subtype› mentioned above) is chosen such that the correct calibration files can be mapped to each run:
Run‹XXX›_‹YYY›_v‹ZZ›.root,
where ‹XXX› and ‹YYY› are the first and last run numbers for which this file applies, and ‹ZZ› is the calibration version number.
File name specification
The file names of the stored ALICE data will be kept simple, and are considered unique for the current subdirectory, for example
‹NNN›.‹S›.AliESDs.root for ESD files,
where ‹NNN›.‹S› is the identifier of the corresponding raw data file (‹NNN›.‹S›.raw/‹NNN›.‹S›.root, also called a raw data-chunk). Therefore different subdirectories (for example the reco/‹PassX›/ESD/ directories for different ‹RunNumber› parent directories) may contain files with the same name but with different content. Nevertheless, the GUID scheme (and the directory structure) makes sure that these files can be distinguished.
To make local copies of files without creating the full directory structure locally (e.g. for local calibration runs over a certain set of files), a macro was developed to stage the requested data files either locally or to another ROOT-supported storage area. Using alienstagelocal.C will copy the files and rename them to more meaningful names, which makes it possible to distinguish them by their new filenames.
Each PbPb event was expected to have a raw data size of 12.5 MB on average. The file size limit of 2 GB restricts the number of events to 160 per raw data file. Since neither the ALTRO compression nor the Hough encoding will be used for now, the event size will actually be a factor of 4 (2×2) larger, which means 40 events per raw data file (or 50 MB/event). A central event currently takes about 50 min to be processed, while a typical minimum bias event takes 10 min on average. This means that it will take 6:40 h to reconstruct all the events of one raw data file. 
It is therefore not necessary to group several raw data files together to produce ‘larger’ ESDs. As a result, each AliESDs.root file will correspond to exactly one raw data file.
For pp events the expected size of 1 MB/event is currently exceeded by a factor of about 16 (16 MB/event). One raw data file with a size limit of 2 GB will therefore contain 125 events. Since it takes about 1 h to reconstruct 100 pp events, one raw data file will be processed in about 1:15 h.
A typical data-taking run with a length of 4 h and a data rate of 100 events/s will generate 360k PbPb (pp) events in 9000 (720 for pp) raw data files. The corresponding ESD/ directory will therefore contain 9000(720)×n files. It is important to keep the number n of output files small, by storing all necessary information in as few files as possible.
Files linked to from the ALICE File Catalogue?
‘All’ available ALICE files will be handled (meaning registered and linked to) by the file catalogue. In particular, this includes the following files/file types:
raw data files,
AliESDs.root files,
AliESDfriends.root files,
ESDtags.root files, and
AliAOD.root files.
For different files/file types a different set of metadata tags will be added, which contains useful information of/for that specific file/file type.
Metadata
In the following, we will list metadata information that will be stored in the database. Note that an actual query of the file catalogue (using the AliEn command find ‹tagname›:‹cond›) might contain more specific tags/tag names than elaborated here. For example, some of the available information will be directly stored in the directory/path structure (and only there; like the ‹RunNumber› and ‹Year›), whereas other information can be ‘calculated’ from the metadata provided (e.g. 
the month when the data was taken, by extracting this information from the run start/stop time metadata).
Run metadata

tag name | data format/possible values | data source
run comment | Text | log book
run type | physics, laser, pulser, pedestal, simulation | log book
run start time | yyyymmddhhmmss | log book
run stop time | yyyymmddhhmmss | log book
run stop reason | normal, beam loss, detector failure, … | log book
magnetic field setting | FullField, HalfField, ZeroField, ReversedHalfField, ReversedFullField, … | DCS
collision system | PbPb, pp, pPb, … | DCS
collision energy | text, e.g. 5.5 TeV | DCS
trigger class | | log book
detectors present in run | bitmap: 0=not included, 1=included | log book
number of events in this run | | log book
run sanity | flag bit or bit mask, default 1=OK | manually

for reconstructed data:

tag name | data format/possible values | data source
production tag | | reconstruction
production software library version | | reconstruction
calibration/alignment settings | ideal, residual, full | reconstruction

for simulated data:

tag name | data format/possible values | data source
generator | Hijing, Pythia, … | manually
generator version | | manually
generator comments | Text | manually
generator parameters | | manually
transport | Geant3, Fluka, … | manually
transport version | | manually
transport comments | Text | manually
transport parameters | | manually
conditions/alignment settings | ideal, residual, full | manually
detector geometry | | manually
detector configuration | | manually
simulation comments | Text | manually

All this information will be accessible through the Config.C file as well. A link to the specific Config.C file used for the production will be provided in each /sim/‹Year›/‹ProductionType›/‹RunNumber›/ directory.
The implementation of a ‘simulation log book’ is considered to be useful. 
It would serve as a summary of the simulation effort, and the information needed for the metadata could be read directly from there.
File metadata

tag name | data format/possible values | data source
file sanity | flag bit: e.g. online/offline or available/not available; default 1=online/available | manual

Additional information about files is available from the file itself, so there is no need to create special metadata.
Additional data is needed in order to store the files into the correct directories (see ‘Path name specification’ above). DAQ will take care of that; they will store the files in the proper location right away and – if necessary – create these new directories.
Population of the database
The metadata database has to be filled with the values obtained from different sources, indicated by the ‘data source’ descriptor in the tables given above. At the end of each run, a process running within the shuttle program will retrieve the necessary information from the DAQ log book and the DCS and write it into the database. This will be done in the same way the detectors retrieve their calibration data.
Data safety and backup procedures
Since the filenames are meaningless by themselves and the directory structure is completely virtual, it is very important to have a backup system or persistency scheme in place. Otherwise, a crash, corruption or loss of the database results in a complete loss of the physics data taken. Even though the logical files would still be available, their content would be completely unknown to the user. 
\rThe currently implemented backup procedure duplicates the whole ALICE File Catalogue and is considered a sufficient security measure.\rAliEn reference\rWhat's this section about?\rThis section makes you familiar with AliEn user interfaces and enables you to run production and analysis jobs as an ALICE user using the AliEn infrastructure.\rThe first part describes the installation procedure for the application interface client package gapi and the functionality of the command line interface – the AliEn shell aliensh.\rThe second part describes the AliEn GRID interface for the ROOT/AliROOT framework and introduces you to distributed analysis on the basis of examples in batch style using the AliEn infrastructure.\rTo be able to repeat the steps described in this HowTo, you must have a valid GRID certificate and you must be a registered user within the AliEn ALICE VO. Information about that can be found at\r\13 HYPERLINK ""\ 1\14\15\r\ 1\rFigure \13 SEQ "Figure" \*Arabic \1415\15: ALICE Grid infrastructure\rThe client interface API\rAll user interactions with AliEn are handled using a client-server architecture. The API for all client applications is implemented in the library libgapiUI. Every client interface can communicate over a session-based connection with an API server. 
The API server exports all functions to the client interface, which are bridged from the AliEn PERL-UI via the AlienAS PERL/C++ interface script.
To use the shared library in a C++ program, just add to the source the line below and link with -lgapiUI.
 #include <gapiUI.h>
Installation of the client interface library package gapi
The standard way to install the client interface package is to use the AliEn installer. The source installation is explained for special use cases - you might skip that section.
Installation via the AliEn installer
Download the AliEn installer from:

using a browser or wget. Set the permissions for execution of the installer: chmod ugo+rx alien-installer
Run the installer:
./alien-installer
Select in the version menu the highest version number with the 'stable' tag and follow the instructions of the installer until you are asked which packages to install.
Select 'gshell - Grid Shell UI' to install the client interface and ROOT to get an AliEn-enabled ROOT package, and proceed.
Note: The installer asks you where to install the packages. The default location is /opt/alien. To create this directory you need root permissions on that machine. If you are not the administrator of that machine, you can only install into your HOME directory or a directory where you have write permissions. 
Recompilation with your locally installed compiler
AliEn comes with the CERN standard compiler (currently gcc 3.4.6). If you want to compile applications with your locally installed compiler (different from AliEn's one) and link against the API library, you have to recompile the API library with your compiler.
To do so, execute the script
/opt/alien/api/src/recompile.gapi [<alien dir> [modules]]
The script recompiles the API library and installs it over the binary installation in your AliEn installation directory.
If you execute the script without arguments, the installation directory configured in the installer will be detected automatically. If you want to recompile an installation different from the one set up by the installer script, you can pass the installation directory as the first argument. If you add 'modules' as the 2nd argument, the PERL and JAVA modules will also be rebuilt. To compile the modules you also need to install the “client” package using the alien installer.
After having recompiled successfully, you should set the variable GSHELL_GCC to the path of your gcc executable (e.g. export GSHELL_GCC=`which gcc`).
Source Installation using AliEnBits
Create a directory to install the build sources, e.g.:
mkdir $HOME/alienbits/ ; cd $HOME/alienbits/
RSync to the development HEAD:
rsync rsync://
or an older version, e.g. v2-6:
rsync rsync:// .
Log in to the CVS with password 'cvs':
cvs -d login
Update your rsynced directory with the latest CVS:
cvs -d update -dPA
Run the configure script and define the installation location:
./configure --prefix=/opt/alien (or --prefix=$HOME/alien)
Change to the api-client source directory:
cd $HOME/alienbits/apps/alien/api
or the api-client-server source directory:
cd $HOME/alienbits/apps/alien/apiservice
Start the build:
make
AliEnBits will download and compile every package which is needed to build the API. This includes the compilation of PERL and GLOBUS and will take about 1 hour. 
Install the build:
make install
Now you have compiled everything that is used by the API package from sources on your computer.
The directory structure of the client interface
The location of the client interface installation is (if not changed in the installer or in AliEnBits with the --prefix option) /opt/alien/api.
The structure of the default installation is
/opt/alien/api/
bin/
aliensh
executable for the AliEn shell interface
No modifications of the PATH or LD_LIBRARY_PATH environment variables are needed to run the shell.
gbbox
busy box executable to run arbitrary commands (we will give a more detailed explanation later)
alien_<cmd>
executables of alien commands outside aliensh (e.g. from a tcsh)
etc/
shell scripts defining auto-completion functions for the aliensh
include/
gapi_attr.h
C++ include file defining gapi file attributes
gapi_dir_operations.h
C++ include file defining POSIX directory operations like opendir, closedir etc.
gapi_file_operations.h
C++ include file defining POSIX file operations like open, close, read etc.
gapi_job_operations.h
C++ include file defining job interface operations like submit, kill, ps etc.
gapi_stat.h
C++ include file defining a POSIX stat interface
gapiUI.h
C++ include file containing the interface class for command execution and authentication on a remote apiservice server. This can be considered as the lowest-level interface; it is independent of AliEn. 
Other include files encapsulate more abstract functionalities, which already 'know' the AliEn functionality.
lib/
java/
java interface to libgapiUI, if installed
libgapiUI.a
libtool library file
link to a current version of the gapiUI shared library
link to a current version of the gapiUI shared library
a current version of the gapiUI shared library (the version number might have changed in the meanwhile)
Additional files for the server
sbin/
gapiserver
server executable
scripts/
test interface script to be used by the gapiservice executable
PERL script interfacing from the C++ server to the AliEn native PERL UI
startup script for the API server gapiserver
Using the Client Interface - Configuration
A minimum configuration to use the API client is recommended, although the API client works without PATH or (DY)LD_LIBRARY_PATH modifications. These modifications are for user convenience, to avoid typing the full executable paths. If you want to have the executable of aliensh and other packaged executables in your PATH, use:

export ALIEN_ROOT=<alien_path>/alien
export GSHELL_ROOT=$ALIEN_ROOT/api
export GLOBUS_LOCATION=$ALIEN_ROOT/globus
export PATH=$GSHELL_ROOT/bin:$GLOBUS_LOCATION/bin:$PATH
export LD_LIBRARY_PATH=$GSHELL_ROOT/lib:$GLOBUS_LOCATION/lib:$LD_LIBRARY_PATH

Configure ROOT with:
--enable-alien --enable-globus \
--with-alien-incdir=$ALIEN_ROOT/api/include \
--with-alien-libdir=$ALIEN_ROOT/api/lib \
Using the Client Interface - Authentication
For performance reasons, session tokens are used for authentication and identification to any API server. These tokens are similar to GRID proxy certificates (limited lifetime). Additionally, they are modified every time they are used. Every token represents one well-defined role, which you specify in the initialization of the token. 
It is issued by the API server and shipped over an SSL connection to the client.\rToken Location\rThe gapi library supports two methods for storing an authentication token:\rStore in memory\rThe token can only be accessed by one application. This method is used e.g. in ROOT and will be explained later.\rStore in a token file. \rThis method is used for the shell implementation aliensh.\rIn the following, we will discuss the handling of file-tokens.\rFile-Tokens\rThe file-token mechanism is implemented in three commands:\ralien-token-init\ralien-token-info\ralien-token-destroy \rNone of the commands touches your current user environment, i.e. they don't modify your PATH or LD_LIBRARY_PATH environment variables.\rSession Token Creation\rSession tokens are stored in the /tmp directory following the naming convention:\r /tmp/gclient_env_${UID}\rFor security reasons, permissions on the token file are set to “-rw-------” for the owner of the token.\rThere are three authentication methods for the API service:\rusing GRID proxy certificates\rusing password authentication (by default disabled on ALICE servers)\rusing an AliEn job token\rNote: It is always recommended to use a GRID proxy certificate for authentication. Password Authentication and Job Token are described for completeness.\rToken Creation using a GRID Proxy certificate – alien-token-init \rTo obtain a session token, you need a GRID proxy certificate. With it, you execute alien-token-init [role] to contact one of the default API services and obtain a session token for this service. The [role] parameter is optional. If you don't specify a role, your local UNIX account name ($USER) is taken as the role in the session request. You can ask the middleware administrators to allow you to use roles other than your personal identity, e.g. the aliprod role for generic productions.\r\ 1\rThe list of default API services is obtained automatically.
The client tries to connect in a well-defined way to one of the available services and to establish a session. The list of available services is centrally configured.\rIf none of the (redundant) services are available, no token will be created.\rInstead of using the automatic server endpoint configuration, you can force a specific API server endpoint and role via the following environment variables:\ralien_API_HOST:\v API server host name e.g.\ralien_API_PORT:\v API server information port, default 9000\ralien_API_USER:\v Role you want to request with your certificate in AliEn\ralien_API_VO:\v Virtual Organization. Presently this parameter is not used.\rThe client itself does not choose the lifetime of session tokens. It depends only on the configuration of the API server you are connecting to. alien-token-init has an auto-bootstrap feature:\rThe first time you use this command, it will run the bootstrap procedure, which will inform you about the creation of certain symbolic links etc. \rIf you move or copy a client interface installation to another directory, the installation bootstraps itself (the first time you use the alien-token-init command).\rIf you miss some of the required libraries, you will get a hint on how to proceed in such a case.\rReturn Values ($?):\r0 Token successfully obtained\r98 Bootstrap error – no permissions\r99 No Token created – authentication failed\rNote: if you have problems like the ones shown here, verify that you don't have the environment variable X509_CERT_DIR set. If you do, remove it! (unset X509_CERT_DIR in bash)\rToken Creation using a password – alien-token-init\rThe procedure to create a token with a password is the same as with GRID proxy certificates and is mentioned only for completeness. In a GRID environment all user authentication is done using proxy certificates. \rIt is possible to configure an API service to authenticate via the PAM modules using login name and password.
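As an aside, since alien-token-init reports the return values listed above in $?, a calling script can branch on them. A minimal sketch follows; the function name is illustrative, not part of the toolkit.

```shell
#!/bin/sh
# Sketch: map the documented alien-token-init exit codes to messages.
# token_status_msg is a hypothetical helper, not an AliEn command.
token_status_msg() {
    case $1 in
        0)  echo "token successfully obtained" ;;
        98) echo "bootstrap error - no permissions" ;;
        99) echo "no token created - authentication failed" ;;
        *)  echo "unexpected exit code: $1" ;;
    esac
}
# typical use:  alien-token-init aliprod; token_status_msg $?
token_status_msg 99
```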
\r\ 1\rOne has to specify the account name as the [role] parameter to alien-token-init, and you will be prompted to enter a password. The client uses an SSL connection to communicate the password, which is validated on the server side using the PAM library. Afterwards, the password is immediately purged from client memory.\rToken Creation via AliEn Job Token – alien-token-init\rA third authentication method using job tokens exists: if jobs are run via the AliEn TaskQueue, you can obtain a token using the AliEn job token known in the environment of your GRID batch job. In this case you don't need to specify any [role] parameter. The role is determined automatically from the AliEn job token found in your job sandbox and mapped on the server side using the AliEn authentication service.\rManual setup of redundant API service endpoints\rIt is possible to override the central configuration and to specify your own list of API service machines via the environment variable alien_API_SERVER_LIST e.g.:\rexport alien_API_SERVER_LIST="host1:9000|host2:9000|host3:9000"\rChecking an API session token – alien-token-info\rSimilar to grid-proxy-info you can execute alien-token-info to get information about your session token.\r\ 1\rHost host name of the API server, where this token is to be used\rPort communication port for remote command execution\rPort2 information port, where the initial authentication methods and protocol versions are negotiated\rUser role associated with this token\rPwd random session password used in the security scheme\rNonce dynamic symmetric cipher key\rSID session ID assigned by the API server\rEncr. Rep. 0 specifies that service replies are not encrypted; 1 specifies that they are fully encrypted.
\rNote: the default operation mode is to encrypt all client requests, while server responses are not encrypted.\rExpires Local Time, when the token will expire\rAfter the token has expired, the last line of the command will be “Token has expired!”, in case it is valid: “Token is still valid!” If no token can be found, the output will be only: “No token found!”\rReturn Values:\r0 Token is valid\r1 Token is not valid\r2 No Token found\rDestroying an API session token – alien-token-destroy\ralien-token-destroy is used to destroy an existing session token.\rReturn Values:\r0 Token has been removed\r100 No Token to be removed\rSession Environment Files\rThe alien-token-init command produces two other files in the /tmp directory.\r/tmp/gclient_env_$UID\rYou can source this file, if you want to connect automatically from an application to the same API server without specifying host and port name in the code. This will be explained in detail later.\r/tmp/gbbox_$UID_$PID\rThis file keeps all the 'client-states' which are not directly connected to authentication or session variables (which are written to the token file): for the moment only the CWD is stored. This file is bound to the PID of your shell to allow different CWDs in different shells.\rThe “client-state” file is configured automatically in aliensh via the environment variable GBBOX_ENVFILE.\rThe AliEn Shell – aliensh\rThe aliensh provides you with a set of commands to access AliEn GRID computing resources and the AliEn virtual file system. 
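As an aside, the /tmp naming conventions described above can be reproduced in a script, e.g. to source the connection environment from another application. This is a sketch; the variable names are illustrative.

```shell
#!/bin/sh
# Sketch: locate the session files described above for the current user.
uid=$(id -u)
token_env=/tmp/gclient_env_$uid        # connection environment, written by alien-token-init
state_file=/tmp/gbbox_${uid}_$$        # per-shell client state (CWD), bound to the shell PID
if [ -r "$token_env" ]; then
    . "$token_env"                     # pick up alien_API_HOST, alien_API_PORT, ...
fi
echo "$token_env"
```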
It can be run as an interactive shell as well as for single-command and script-execution.\rThere are three command categories:\rinformative + convenience commands \rVirtual File Catalogue + Data Management Commands\rTaskQueue/Job Management Commands\rShell Invocation\rInteractive Shell Startup/Termination\rThe shell is started invoking the 'aliensh' executable as seen in the following example:\r\ 1\rThe shell advertises the present protocol version used on client-side (aliensh 2.1.10 in this case) and displays the server message of the day (MOTD) after invocation.\r You can execute commands by typing them into the shell and invoking the execution with the ENTER key. In fact, aliensh is a special featured bash shell.\rShell return values are all constructed following common practice:\rReturn Values:\r0 Command has been executed successfully\r!=0 Command has produced “some” error\rMost of the shell commands provide a help text, if executed with a “-h” flag. You can leave the shell with the “exit” command:\r\ 1\rShell Prompt\rThe shell prompt displays in the 1st bracket the VO name, the 2nd is a command counter. The right hand side of the prompt displays the user's CWD. If the shell is invoked the first time, the CWD is set to the user's home directory. Otherwise the CWD is taken from the previous session.\rShell History\rThe shell remembers the history of previously executed commands. These commands are stored in the history file $HOME/.aliensh_history.\rShell Environment Variables\raliensh defines the following environment variables:\ralien_API_HOST\ralien_API_PORT\ralien_API_USER\ralien_API_VO\rMONALISA_NAME\vMona Lisa Site Name e.g. “CERN”\rMONALISA_HOST\vResponsible Host running a MonALISA Server\rAPMON_CONFIG\vResponsible MonALISA Server to receive ApMon UDP packets\rHOME\vYour virtual AliEn home directory\rGBBOX_ENVFILE\vEnvironment file keeping shell state variables (e.g. 
CWD)\rGSHELL_ROOT\vInstallation path of the API client package\rSingle Command Execution from a user shell\rYou can execute a single command by invoking the shell with the command as the first parameter:\r\ 1\rReturn Values:\r0 Command has been executed successfully\r1 Command has been executed and returned an error\r127 Command does not exist and could not be executed\r255 Protocol/Connection error\rScript File Execution from a user shell\rYou can execute an aliensh script by giving the script filename as the first argument to aliensh. Local script files have to be prefixed with 'file:', otherwise the script is taken from the AliEn file catalog!\r\ 1\rThe final line of the script output 'exit' is produced, when the aliensh terminates and is not part of the user script.\rReturn Values:\r0 Script was executed successfully and returned 0\r1 Script was executed and returned != 0\r127 Script does not exist and could not be executed\r255 Protocol/Connection error\rScript execution inside aliensh “run”\rIf you want to source a script which is stored in the file catalogue or on the local disk, you can use “run <filename>” for a file in the catalogue or “run file:<filename>” for a file on your local disk. The known 'source' command sources only local files.\rBasic aliensh Commands\rwhoami\rPrint the user's role on STDOUT.\rReturn Values:\r0 Successfully executed \r255 Protocol/Connection error\rclear\rClear your terminal window\rpwd\rPrint the cwd on stdout\rReturn Values :\r0 Successfully executed\r255 Protocol/Connection error\rgbbox [-d] [-h] [-f <tag1>,[<tag2>]..]\rBusy Box command, used to execute most of the aliensh commands. Example: “whoami” executes in fact “gbbox whoami”\r\ 1\rThe API protocol returns always four streams in the response:\rSTDOUT\rSTDERR\rResults Structure (Array of Key-Value pairs)\rMisc. Hash (Array of Key-Value pairs for environment variables etc.)\rThe shell prints only the STDOUT and STDERR stream on your terminal. 
With the “-d” option, you get a dump of all returned streams:\r\ 1\rThe “-o <tag>” option prints only the tag <tag> of the result stream to your terminal. For Boolean functions the return value is returned with the tag name “__result__”:\r\ 1\rThis command is very useful for a quick check of what a busy box command returns, especially with respect to the API library, where you can easily access any tag in the result structure directly.\rReturn Values:\r0 Command has been executed successfully\r1 Command has been executed and returned an error\r127 Command does not exist and could not be executed\r255 Protocol/Connection error\rcd [<directory>]\rchange to the new working directory <directory>. If the directory is unspecified, you are dropped into your home directory. <directory> can be a relative or absolute PATH name. As in standard shells, 'cd -' drops you into the previously visited directory – if there is no previous one, into your home directory.\rReturn Values:\r0 Directory has been changed\r1 Directory has not been changed\r255 Protocol/Connection error\rls [-laF|b|m] [<directory>]\rlist the current directory or <directory>, if specified.\r“-l” list in long format\rthe long format uses colour highlighting\vblue plain files\vgreen directories\vred JDL files\r“-a” list also hidden files starting with “.”\r“-b” list in guid format (guid + name)\r“-m” list in md5 format (md5 + name)\r“-F” list directories with a trailing “/”\r“-n” switch off the colour mode for the long format\rNote: if you want to use the ls function in scripts and parse the output, you should always add the -n flag to switch off the colour mode for the '-l' long format.\r\ 1\r\ 1\r\ 1\r\ 1\rReturn Values:\r0 Directory has been listed\r1 Directory has not been listed\r255 Protocol/Connection error\rmkdir [-ps] <directory>\rcreate the directory <directory>\r“-p” recursively creates all needed subdirectories\r“-s” silences the command – no error output\r\ 1\rReturn Values:\r0 Directory has been
created\r1 Directory has not been created\r255 Protocol/Connection error\rrmdir [-r] <directory>\rremove the directory <directory> or everything under the directory tree <directory>\r“-r” recursively removes files and directories\r\ 1\rPlease note that, if you use the '-r' flag, you remove only entries from the file catalogue, but not their physical files on a mass storage system. Use ‘erase’ to remove files physically.\rData Management Commands\rcp [-s ][-d] [-v] [-m][-n] [-t <time>] [-f] [-i <infofile>]<src> <dst>\rcopy file(s) from <src> to <dst>: This command always produces a physical copy of a file. It does not only copy an entry in the catalogue to a new name!\r<src> and <dst> have to be given with the following URL-like syntax:\r[alien:|file:] <path|dir>[@<SE>]\rExamples:\r"alien:*.root" specifies all '.root' files in the AliEn CWD\r"alien:/alice/" specifies the “/alice/” directory in AliEn\r"file:/tmp/myfile.root" specifies the local file '/tmp/myfile.root' on your computer\r"myfile.root@ALICE::CERN::Castor2" specifies the file “myfile.root” in the CWD in AliEn in the ALICE::CERN::Castor2 Storage Element\rIf a file has several replicas and you don't specify a source SE, the closest SE specified in the environment variable alien_close_SE is taken. If this is not set, the closest one to your client location is configured automatically.. \rIf you don't specify a target SE, the SE specified in the environment variable alien_close_SE is taken\rIf <src> selects more than one file (e.g. '*.root') <dst> must be a directory – otherwise you will get an error\rIf <src> is a directory (e.g. '/tmp/mydir/ ' ) <dst> must also be a directory – otherwise you will get an error\rOptions:\r“-s” be silent, in case of no error, don’t print anything\r“-d” switch on debug output. 
This can be useful to find reasons for command failures\r“-v” more verbosity – print summaries about source, destination, size and transfer speed\r“-m” MD5 sum computation/verification\rlocal to grid file : compute the md5 sum of the local file and insert it into the catalogue. This is turned “on” by default for local to grid file copy operations\rgrid to local file : verify the md5 sum of the downloaded file\rlocal to local file : flag ignored\rgrid to grid file : recompute the md5 sum of the source file and enter it in the catalogue with the destination file\r“-n” new version – this option is only active if the destination is a file in AliEn: if the destination file already exists, the existing destination is moved into the subversions directory (see the following example) and labeled with a version number starting with v1.0. If the new version to be copied has the same md5sum, the file is not copied at all!\rExample:\rIf you overwrite a file named “myfile” with the “-n” option, the original “myfile” is moved to “.myfile/v1.0” and “myfile” contains the last copied file. If you repeat this, “myfile” is moved to “.myfile/v1.1” etc.\r“-t” <seconds> specifies the maximum waiting time for all API service requests (which are executed within the copy command). The default is 10 seconds. \r“-f” “force” transfer. This sets the first connection timeout to 1 week and the retry timeout (if a connection is interrupted during a transfer) to 60 seconds. The defaults are 15 seconds for the first connection timeout and 15 seconds for retries. In interactive shells, however, it is more user-friendly if a copy command times out automatically.\r“-i <file>“ writes an info file locally to <file>, containing the <src> and <dst> parameters used, e.g.
you can find out from which SE a file has been read if you didn’t specify it in <src>.\rReturn Values:\r0 File(s) copied successfully or you have just requested the “-h” option for the usage information\r1 Copy failed\r20 access to the source file has been denied\r21 access to the destination file has been denied\r22 timeout while getting an access envelope\r23 you tried to copy multiple files, but the destination was not a directory\r24 could not create the target destination directory when copying multiple files\r25 copy worked, but couldn’t write the info file given by “-i <infofile>“.\r30 the md5 sum of the source file is not “0” and the md5sum of the downloaded file differs from the catalogue entry. \r200 Control-C or SIG Interrupt signal received \r250 an illegal internal operation mode has been set (can happen only in “buggy” installations/setups)\r251 <src> has not been specified\r252 <dst> has not been specified\r253 the xrootd xcp or xrdcp program is missing – the first place searched is your PATH environment; if it can’t be found there, the client looks in your API installation directory or one level (../) higher\r 255 the copy operation itself to/from the SE failed (xrdcp failed)\rExamples:\rcopy local to AliEn file:\r\ 1\rcopy AliEn to local file:\r\ 1\rcopy local to AliEn file with verbose option:\r\ 1\rcopy local to existing AliEn file creating new version:\r\ 1\rcopy AliEn to local file – write copy information file:\r\ 1\rcopy to a specific AliEn storage element:\r\ 1\r\r\rrm [-s] [-f] <file>\rremove the entry <file> from the file catalogue\r“-s” - be silent – don’t print ERROR messages\r“-f” - succeeds even if the file did not exist\rReturn Values:\r0 File has been removed\r1 File could not be removed\r252 <file> parameter missing\r255 Protocol/Connection error\r\ 1\rcat <file>\rprint <file> to STDOUT.
<file> has an URL-like syntax and can reference an AliEn or local file:\r“cat file:/tmp/myfile.txt” - print the local file “/tmp/myfile.txt”\r“cat myfile.txt” - print the AliEn file “myfile.txt” from the CWD\r“cat alien:/alice/myfile.txt” - print the AliEn file “/alice/myfile.txt” (the protocol “alien:” is optional from within the shell).\rReturn Values:\r0 File has been printed with “cat”\r1 File could not be printed with “cat”\r246 the local copy of the file could not be removed\r250 an illegal internal operation mode was set (can happen only in “buggy” installations/setups)\r252 <file> parameter missing\r255 Protocol/Connection error\rmore <file>\ruse “more” to print the file <file> on STDOUT. <file> has an URL-like syntax and can reference an AliEn or local file:\r“more file:/tmp/myfile.txt” - print the local file “/tmp/myfile.txt”\r“more myfile.txt” - print the AliEn file “myfile.txt” from the CWD\r“more alien:/alice/myfile.txt” - print the AliEn file “/alice/myfile.txt” (the protocol “alien:” is optional from within the shell).\rReturn Values:\r0 File has been printed with “more”\r1 File could not be printed with “more”\r246 the local copy of the file could not be removed\r250 an illegal internal operation mode was set (can happen only in “buggy” installations/setups)\r252 <file> parameter missing\r255 Protocol/Connection error\rless <file>\rprint the file <file> to STDOUT. 
<file> has an URL-like syntax and can reference an AliEn or local file:\r“less file:/tmp/myfile.txt” - print the local file “/tmp/myfile.txt”\r“less myfile.txt” - print AliEn file “myfile.txt” in the CWD\r“less alien:/alice/myfile.txt” - print the AliEn file “/alice/myfile.txt” ( the protocol “alien:” is optional from within the shell).\rReturn Values:\r0 File has been printed with “less”\r1 File could not be printed with “less”\r246 the local copy of the file could not be removed\r250 an illegal internal operation mode was set (can happen only in “buggy” installations/setups)\r252 <file> parameter missing\r255 Protocol/Connection error\redit [-c] <file>\redit local or AliEn files using your preferred editor. The file <file> is copied automatically into the /tmp directory, and you work on this local copy. If you close your editor, the file is saved back to the original location, if you have modified it. The default editor is “vi”. You can switch to another editor by setting the environment variable EDITOR:\rfor vim: EDITOR="vim"\rfor emacs: EDITOR="emacs" or "emacs -nw"\rfor xemacs: EDITOR="xemacs" or "xemacs -nw"\rfor pico: EDITOR="pico"\rNote: Change this setting in the local aliensh rc-file $HOME/.alienshrc <file> has an URL-like syntax and can reference an AliEn or a local file:\r“edit file:/tmp/myfile.txt” - edit the local file “/tmp/myfile.txt”\r“edit myfile.txt” - edit the AliEn file “myfile.txt” in the CWD\r“edit alien:/alice/myfile.txt” - edit the AliEn file “/alice/myfile.txt” (the protocol “alien:” is optional from within the shell).\r“edit alien:/alice/myfile.txt@ALICE::CERN::Tmp” - edit the AliEn file “/alice/myfile.txt”. The file is read preferably from the SE “ALICE::CERN::Tmp”, if this is not possible from another “closest” SE. 
The file will be written back into “ALICE::CERN::Tmp”.\rAliEn files are by default written back into the same storage element, unless you specify a different one by appending “@<SE-Name>“ to <file>.\rEvery modified AliEn file is saved as a new version in the file catalogue. See the “cp -n” option for information about versioning.\rOf course, you can only edit files that exist. If you want to edit a new empty file use:\r“-c”: create a new empty file and save it to <file>. If <file> is a local file and already exists, it is overwritten with the newly edited file. If <file> is an AliEn file and exists, it is renamed to a different version and your modified version is stored as <file>\r“-h”: print the usage information for this command\rReturn Values:\r0 File has been edited and rewritten, or the “-h” flag was given \r1 File could not be written back. You get information about your temporary file to rescue it by hand.\r246 the local copy of the file could not be removed\r250 an illegal internal operation mode was set (can happen only in “buggy” installations/setups)\r252 <file> parameter missing\rerase <file>\rphysically remove all replicas of <file> from storage elements and the catalogue entry\rReturn Values:\r0 File has been erased\r1 File could not be erased\r255 Protocol/Connection error\r\ 1\rpurge <file>|<directory>\rwith <file> parameter: removes all previous versions of <file> except the latest\rwith <directory> parameter: same as above for each file contained in <directory>\rReturn Values:\r0 File has been purged | directory has been purged\r1 File or Directory could not be purged – see output message for details\r255 Protocol/Connection error\rwhereis [-l] <file>\rlist all replicas of file <file>.
It includes the GUID, the TURL and the SE name.\r“-l” list only the names of the SEs and the GUID\r“-h” print the usage information\rReturn Values:\r0 Command successful\r1 Command failed\r255 Protocol/Connection error\rExample: \rlocate a file on SEs only:\r\ 1\rlocate a file's GUIDs/TURLs and SEs:\r\ 1\radvanced usage: use the busy box command to print only one output field of the command: \r\ 1\rmirror <lfn> <se>\rIf you want to replicate files from one SE to another, you can use the mirror command. <lfn> is the file you want to replicate, <se> is the target storage element.\rReturn Values:\r0 Command successful\r1 Command failed\r255 Protocol/Connection error\rdf [<SE-name>]\rReport the disk space usage of the default SE or <SE-name>\rReturn Values:\r1 in any case\rfind [-<flags>] <path> <fileName|pattern> [[<tagname>:<condition>] [ [and|or] [<tagname>:<condition>]]*]\rhelps to list catalogue entries according to certain criteria. The search is always tree-oriented, following the hierarchical structure of the file catalogue (like the UNIX find command).\rThe simplest find syntax is:\r find /alice/ *.root\r\ 1\rUse the ‘%’ or the ‘*’ character as a wildcard. You can switch on the summary information with the '-v' flag:\r\ 1\rDepending on your needs you can also select which information you want to print per entry using the '-p <field1>[,<field2>[,<field3>...]]' option. E.g.
:\r\ 1\rAvailable fields are (only the 'interesting' ones are commented): \r\rseStringlist\a\a\aaclId\a\a\alfn\alogical AliEn file name\a\adir\a\a\asize\asize of the file in bytes\a\agowner\aGroup\a\aguid\athe GUID of the file\a\aowner\aowner (role)\a\actime\a\a\areplicated\a\a\aentryId\a\a\aexpiretime\a\a\aselist\a\a\atype\a\a\amd5\athe MD5 sum of that file\a\aperm\athe file permissions in text format\a\aIf you add the '-r' flag, you can print additional fields describing the location of that file:\rlongitude\rlatitude\rlocation\rmsd (mass storage domain)\rdomain\rIn combination with ROOT the flag '-x <collection name>' is useful since it prints the result as XML collection, which can be read directly by ROOT. \r\rYou can easily store this in a local file 'find ..... > /tmp/collection.xml'.\rAdditional metadata queries can be added using the tagname:condition syntax. Several metadata queries can be concatenated with the logical ‘or’ and ‘and’ syntax. You should always single-quote the metadata conditions, because < > are interpreted by the shell as pipes. The following example shows a metadata query on the tag database:\r\rThe same query as above, but this time with the XML output format. All metadata information is returned as part of the XML output file:\r\rMetadata Commands\rSchema Definition\rTo tag metadata in AliEn, you first need to create a schema file defining the metadata tag names and variable types. You find existing tag schema files for your VO in the AliEn FC in the directory “/<VO>/tags/” or in your user home directory “~/tags”.\rThe syntax used to describe a tag schema is:\r<tagname 1> <variable type 1> [,<tarname 2> <variable type 2> …] \r<variable type> are SQL types and can be e.g.\rchar(5)\rint(4)\rfloat\rtext\rIf you want to create your own tag schema you will have to copy a schema file into the tags directory in your home directory. You might need to create it, if it does not exist.\rTo tag your files e.g. 
with some description text, create a tag file ~/tags/description containing:\r\raddTag [-d] <directory> <tag name>\radd the schema <tag name> to <directory>. The schema file called <tag name> must be saved in the AliEn FC under ‘/<VO>/tags’ or in your home directory ‘~/tags’. All subdirectories will inherit the same schema. By default, all directories using the same schema store the metadata information in the same table. The ‘-d’ flag creates a separate database table for the metadata of schema <tag name>. \rReturn Values:\r0 Command successful\r1 Command failed\r255 Protocol/Connection error\rshowTags <directory>\rShows all defined tags for <directory>.\rReturn Values:\rCommand successful\rCommand failed\rProtocol/Connection error\rremoveTag <directory> <tag name>\rRemoves the tag <tag name> from <directory>. All defined metadata will be dropped.\rReturn Values:\r0 Command successful\r1 Command failed \r255 Protocol/Connection error\r\raddTagValue <file> <tag name> <variable>=<value> [<variable>=<value> … ]\rSets the metadata <variable> to <value> in schema <tag name> for <file>. <tag name> must be an existing metadata schema defined for a parent directory of <file>, otherwise the command will return an error.\r\rReturn Values:\r0 Command successful\r1 Command failed\r255 Protocol/Connection error\rshowTagValue <file> <tag name>\rShows the tag values for <file> from metadata schema <tag name>.\rReturn Values:\r0 Command successful\r1 Command failed\r255 Protocol/Connection error\rremoveTagValue <file> <tag name>\rRemoves a tag value :\r\rFile Catalogue Trigger\rAliEn allows you to trigger actions on insert, delete or update events for a certain directory tree. A trigger action is a script registered in the file catalogue under /<VO>/triggers or ~/triggers, which receives the full logical file name of the modified entry as a first argument when invoked.\raddTrigger <directory> <trigger file name> [<action>]\rAdds a trigger <trigger file name> to <directory>. 
<action> can be insert, delete or update (default is insert).\rshowTrigger <directory>\rShows the defined trigger for <directory>.\rremoveTrigger <directory> [<action>]\rRemoves the trigger with <action> from <directory>.\rJob Management Commands\rtop [-status <status>] [-user <user>] [-host <exechost>] [-command <commandName>] [-id <queueId>] [-split <origJobId>] [-all] [-all_status]\rprint job status information from the AliEn task queue\r\r-status <status>\aprint only jobs with <status>\a\a-user <user>\aprint only jobs from <user>\a\a-host <exechost>\aprint only hosts on <exechost>\a\a-command <command>\aprint only tasks executing <command>\a\a-id <queueId>\aprint only task <queueId>\a\a-split <masterJobId>\aprint only tasks belonging to <masterJobId>\a\a-all\aprint jobs of all users\a\a-all_status\aprint jobs in any state\a\aReturn Values:\ranything not 255 Command has been executed\r255 Protocol/Connection error\rps [.....] \rsimilar functionality to 'top': report process states\rIf the environment variable alien_NOCOLOUR_TERMINAL is defined, all output will be black&white. This is useful, if you want to pipe or parse the output directly in the shell.\rThe following options are defined (parameters like <list> are comma separated names or even just a single name):\r\r-F {l} \al = long (output format)\a\a-f <flags/status>\ae.g. -f DONE lists all jobs in status ‘done’\a\a-u <userlist>\alist jobs of the users from <userlist>. 
-u % selects jobs from ALL users!\a\a-s <sitelist>\alists jobs which are or were running in <sitelist>\a\a -n <nodelist>\alists jobs which are or were running in <nodelist>\a\a-m <masterjoblist>\alist all sub-jobs which belong to one of the master jobs in <masterjoblist>\a\a-o <sortkey>\aexecHost, queueId, maxrsize, cputime, ncpu, executable, user, sent, split, cost, cpufamily, cpu, rsize, name, spyurl, commandArg, runtime, mem, finished, masterjob, status, splitting, cpuspeed, node, error, current, received, validate, command, splitmode, merging, submitHost, vsize, runtimes, path, jdl, procinfotime, maxvsize, site, started, expires\a\a-j <jobidlist>\alist all jobs with from <jobidlist>.\a\a-l <query-limit>\aset the maximum number of jobs to query. For a non-privileged user the maximum is 2000 by default\a\a-q <sql query>\aif you are familiar with the table structure of the AliEn task queue, you can specify your own SQL database query using the keys mentioned above. If you need a specific query, ask one of the developers for help.\a\a-X\aactive jobs in extended format\a\a-A \aselect all your owned jobs in any state\a\a-W \aselect all YOUR jobs which are waiting for execution\a\a-E \aselect all YOUR jobs which are in error state\a\a-a \aselect jobs of ALL users\a\a-jdl <jobid>\adisplay the job jdl of <jobid>.\a\a-trace <jobid> [trace-tag[,trace-tag]] \adisplay the job trace information. If tags are specified, the trace is only displayed for these tags per default, all proc tags are disabled. 
to see the output with all available trace tags, use ps -trace <jobid> all.\rthe available tags are:\a\a\aproc\aresource information\a\a\astate\ajob state changes\a\a\aerror\aerror statements\a\a\atrace\ajob actions (downloads etc.)\a\a\aall\aoutput with all previous tags\a\aReturn Values:\r0 Command has been executed\r255 Wrong command parameters specified\r\ 1\rsubmit [-h] <jdl-file>\rsubmits the JDL file <jdl-file> to the AliEn task queue.\rlocal JDL files are referenced using “file:<local-file-name>”\rAliEn files are referenced by “<alien-file-name>” or “alien:<alien-file-name>”.\r\ 1\rReturn Values:\r0 Submission successful\r255 Submission failed\rWarning: local JDL files can only be submitted if they don't exceed a certain size. In case you reference thousands of input data files, it is safer to register the JDL file in the file catalogue and submit it from there. The proper way to access many input data files is to use the InputDataCollection tag and to register an XML input data collection in AliEn as explained later on. \r\ 1\rkill <job-id>\rkills a job from the AliEn task queue\r<job-id> can be a 'single' or a 'master' job. Killing a 'master' automatically kills all its sub-jobs.\rTo be able to kill a job, you must have the same user role as the submitter of that job or be a privileged user.\rReturn Values:\r0 Job successfully killed\r255 Job couldn't be killed\r\rqueue [ info, list, priority .....]\rprovides TaskQueue state information and other parameters to the user. It accepts three subcommands:\rqueue info [<Queue Name>]\rprints for one <Queue Name> or all sites the status information and counters of individual job states. Here you can see e.g.
if the site where you want to run your GRID job is currently blocked, closed or in any error state.
queue list [<Queue Name>]
Prints for <Queue Name> or for all sites status information, configuration and load parameters.
queue priority
queue priority list [<user-name>]
Prints for one user (<user-name>) or for all users information about their priorities. Priorities are specified by a nominal and a maximal value of parallel jobs. Currently, the maximum is not enforced. Initially, every new user has both parameters set to 20. If the system is not under load, you will be able to run more than 20 jobs in parallel. If the system is loaded in such a way that the nominal jobs of all users fill up exactly the capacity, you can run exactly 20 jobs in parallel. If the system is running above the nominal load, you will run fewer than 20 jobs in parallel, according to a fair share algorithm. If you need a higher job quota, contact the ALICE VO administrators.
queue priority jobs [<user-name>|%] [<max jobs>]
Prints the job ranking for all jobs or for <user-name>. The <max jobs> parameter limits the list if there are too many jobs in the queue. If you want to see the ranking of the ten jobs which are to be executed next, if picked up by an agent, do
queue priority jobs % 10
spy <job id> workdir|nodeinfo|<sandbox file name>
Allows to inspect running jobs. The first argument is the job ID you want to inspect. The second argument can be:
workdir: lists the contents of the working directory
nodeinfo: displays information about the worker node where the job <job id> is running
<sandbox file name>: can be e.g. stdout, stderr or any other file name existing in the sandbox. Be careful not to spy on big files, and be aware that the content is printed as text onto your terminal.
registerOutput <job id>
Failing jobs do not automatically register their output in the file catalogue. If you want to see (log) files of failed jobs, you can use this command. 
Output files are registered in your current working directory. It is convenient to create an empty directory and change into it before executing this command.
Package Management
packages
This command lists the available packages defined in the package management system.
Return Values:
0 Command has been executed
255 Protocol/Connection error
Structure and definition of Packages
You select a certain package by specifying in your job description:
Packages={ <package name 1> [, <package name 2> ...] };
Packages are divided into user and VO packages. The VO packages can be found in the file catalogue under /alice/packages, while user packages are stored in the home directory of users under $HOME/packages.
Create an AliEn package from a standard ROOT CVS source
If you want to publish your self-compiled ROOT version:
cd root/
make dist
cd ../
create a file ".alienEnvironment"
unpack the dist file created by ROOT: e.g. gunzip root*.tgz
add the .alienEnvironment file: tar rvf root*.tar .alienEnvironment
zip the ROOT archive file: gzip root*.tar
publish the new package in AliEn as your private version 5.10.0:
mkdir -p $HOME/packages/ROOT/5.10.0/
cp file:<root-tgz> $HOME/packages/ROOT/5.10.0/`uname`-`uname -p`

#################### PackMan Setup File for ROOT ####################
echo "*** PackMan Setup Start ***"
export ROOTSYS=$1/root
echo "ROOTSYS set to $ROOTSYS"
export PATH=$ROOTSYS/bin:$PATH
echo "PATH set to $PATH"
export LD_LIBRARY_PATH=$ROOTSYS/lib:$LD_LIBRARY_PATH
echo "LD_LIBRARY_PATH set to $LD_LIBRARY_PATH"
echo "*** PackMan Setup End ***"
# The following two lines MUST be there!
shift
$*
Define Dependencies
To define 
dependencies, you first need to add the metadata schema PackageDef to your software directory ~/packages. Then, you add another software package as a dependency by adding the tag variable 'dependencies' to your package version directory. Several packages can be referenced in a comma-separated list.
Example:
addTag PackageDef ~/packages/ROOT
addTagValue 5.11.07 PackageDef dependencies="VO_ALICE@APISCONFIG::V2.2"
Pre- and Post-Installation scripts
Pre- and post-installation scripts are defined via metadata tags on the software version directory. The schema is again PackageDef, the tags are:
pre_install
post_install
The assigned tag value references the pre-/post-installation script by its logical AliEn file name.
The ROOT AliEn Interface
Installation of ROOT with AliEn support
This document proposes three different ways to install ROOT. If you develop within the ROOT framework and you need to modify ROOT itself, it is recommended to follow the manual source installation from CVS. 
If you don't need to develop parts of ROOT, you can use the source installation via AliEnBits (which recompiles ROOT on your machine) or the binary installation via the AliEn installer (which installs a precompiled binary).
Manual Source Installation from CVS
Login to the ROOT CVS server with 'cvs' as password:
cvs -d :pserver:cvs@root.cern.ch:/user/cvs login
Check out the ROOT source code, either the CVS Head:
cvs -d :pserver:cvs@root.cern.ch:/user/cvs co root
or a tagged version (e.g. v5-10-00):
cvs -d :pserver:cvs@root.cern.ch:/user/cvs -r v5-10-00 co root
The AliEn module in ROOT needs GLOBUS to be enabled. You need to set the environment variable GLOBUS_LOCATION; e.g. the version installed by the AliEn installer is referenced by:
export GLOBUS_LOCATION=/opt/alien/globus (or setenv)
Run the configure script enabling AliEn:
./configure --enable-alien
The script will look for the API package in the default location '/opt/alien/api'. If the API is installed in another location, you can specify this using the '--with-alien-incdir=<>' and '--with-alien-libdir=<>' options. E.g. 
if you have the API installed under $HOME/api, execute:
./configure --enable-alien --with-alien-incdir=$HOME/api/include --with-alien-libdir=$HOME/api/lib
Compile ROOT:
make
For all questions concerning the ROOT installation in general, consult the ROOT web page.
Source installation using AliEnBits
Follow the instructions in section 5.3.3, which explain how to install the AliEnBits framework, up to (and including) the 'configure' statement. Change into the ROOT application directory and start the compilation:
cd $HOME/alienbits/apps/alien/root
make
If you previously installed the API via AliEnBits, the ROOT configuration and compilation will start immediately. If not, the AliEnBits system will first download all dependent packages and compile them.
Note: the AliEnBits system will install the ROOT version defined in the build system for the AliEn release you are using. It is defined as the symbol 'GARVERSION' in $HOME/alienbits/apps/alien/root/root/Makefile. You cannot easily switch to another CVS tag following this procedure.
Binary Installation using the AliEn installer
Follow the steps in section 5.3.1, but select 'ROOT' as the package to be installed. If you selected the default location '/opt/alien/' in the installer, you will find ROOT installed under '/opt/alien/root'.
ROOT Startup with AliEn support - a quick test
To validate your installation, do the following test:
Use the alien-proxy-init command to retrieve a shell token from an API service (see chapter 5.6). 
It is convenient to write a small script for the ROOT start-up:
#!/bin/bash
test -z $ROOTSYS && export ROOTSYS=/opt/alien/root
export PATH=$ROOTSYS/bin:$PATH
export LD_LIBRARY_PATH=$ROOTSYS/lib:$LD_LIBRARY_PATH:/opt/alien/api/lib
if [ -e /tmp/gclient_env_$UID ]; then
  source /tmp/gclient_env_$UID;
  root $*
fi
Once you get the ROOT prompt, execute
TGrid::Connect("alien:",0,0,"t");
This uses your present session token and prints the 'message of the day' (MOTD) onto your screen. This method is described in more detail in the following subsections.
The ROOT TGrid/TAlien module
ROOT provides a virtual base class for GRID interfaces encapsulated in the TGrid class. The ROOT interface to AliEn is implemented in the classes TAlien and TAlienFile. TAlien is a plug-in implementation of the TGrid base class and is loaded if you specify 'alien:' as the protocol in the static factory function of TGrid. TAlienFile is a plug-in implementation of the TFile base class.
E.g. the factory function for TGrid is the 'Connect' method:
TGrid* alien = TGrid::Connect("alien://");
This triggers the loading of the AliEn plug-in module and returns an instance of TAlien.
TFile is the base class for all file protocols. A TAlienFile is created in the same manner by the static factory function 'Open' of TFile:
TFile* alienfile = TFile::Open("alien://....");
The following sections highlight the most important aspects of the interface and are not meant to be exhaustive. 
For more details see the ROOT documentation included in the source code, which is located in the 'root/alien' directory of the source.
Note: examples in this chapter are self-contained, which means that they always start with a Connect statement.
TGrid::Connect - Authentication and Session Creation
The first thing to do (see the quick test above), in order to get access to AliEn data and job management functionalities, is to authenticate with an API service, or to use an already existing session token.
As described in section 5.5.1, we can store a session token within the ROOT application, or we can access a session token that was established outside the application for use by aliensh.
//--- Load desired plug-in and setup connection to GRID
static TGrid *Connect(const char *grid, const char *uid = 0, const char *pw = 0, const char *options = 0);
Syntax 1: TGrid::Connect
Consider these example statements to initiate connections:
Connect creating a memory token to a default API service:
TGrid::Connect("alien://");
Connect creating a memory token to a user-specified API service:
TGrid::Connect("alien://");
Connect creating a memory token with a certain role:
TGrid::Connect("alien://","aliprod");
Connect using an existing file token (created by alien-token-init):
TGrid::Connect("alien://",0,0,"t");
Note: the first method mentioned should apply to 99% of all use cases. If you use programs that fork, you should always use the file based token mechanism, or call the Connect method again in every forked process. If you are using threads, you must protect the Command statements later on with a mutex lock.
Return Values:
(TGrid*) 0 Connect failed
(TGrid*) != 0 Connection established
ROOT sets the global variable gGrid automatically with the result of TGrid::Connect. 
If you deal only with one GRID connection, you can just use that one to call any of the TAlien methods, e.g. gGrid->Ls(), but not TAlien::Ls()!
TGrid::Connect("alien://") is equivalent to the call new TAlien(...), which bypasses the plug-in mechanism of ROOT!
TAlien::Command - arbitrary command execution
You can execute any aliensh command with the Command method (except the cp command).
TGridResult *Command(const char *command, bool interactive = kFALSE, UInt_t stream = kOUTPUT);
Syntax 2: TAlien::Command
As you have already seen, the API protocol returns four streams. The stream to be stored in a TGridResult is, by default, the result structure of each command (TAlien::kOUTPUT). Another stream can be selected using the stream parameter:
STDOUT: stream = TAlien::kSTDOUT
STDERR: stream = TAlien::kSTDERR
result structure: stream = TAlien::kOUTPUT
misc. hash: stream = TAlien::kENVIR
Note: you need to add #include <TAlien.h> to your ROOT code in order to have the stream definitions available!
If you set interactive=kTRUE, the STDOUT+STDERR streams of the command appear on the terminal; otherwise it is silent.
The result structure and examples for using the Command method are explained in the next section.
Continue reading until the TAlien::Ls example below and then try it.
TAlienResult - the result structure of TAlien::Command
TAlienResult is the plug-in implementation of a TGridResult returned by TAlien::Command. A TAlienResult is based on a TList which contains a TMap per list entry. It is roughly a list of key-value pairs. 
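The list-of-maps shape of a TAlienResult can be pictured with plain standard-C++ containers. This is an illustrative stand-in, not the ROOT API; the file name, size and keys below are made up:

```cpp
#include <map>
#include <string>
#include <vector>

// Illustrative stand-in for the TAlienResult shape: a list (TList)
// of key-value maps (TMap), one map per returned catalogue entry.
typedef std::vector<std::map<std::string, std::string> > ResultList;

// Build a tiny fake result, roughly as an 'ls' might return it
// (entry contents are hypothetical).
ResultList makeFakeLsResult() {
    ResultList result;
    std::map<std::string, std::string> entry;
    entry["name"] = "galice.root";
    entry["size"] = "1024";
    result.push_back(entry);
    return result;
}
```

Iterating such a structure with an index and a key name mirrors the GetKey(i, "name") loop shown in the listing example below.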
If you want to find out the names of the returned keys, just use the TAlienResult::Print("all") method or the 'gbbox -d' command.
TAlien::Ls - Example of Directory Listing in the File Catalogue
Consider this example, which executes the 'ls -la' function, then dumps the complete result and finally loops over the result and prints all file names in the /alice directory:
TGrid::Connect("alien://");
TGridResult* result = gGrid->Command("ls -la /alice",0,TAlien::kOUTPUT);
result->Print("all");
Int_t i = 0;
while (result->GetKey(i,"name")) printf("Listing file name: %s\n",result->GetKey(i++,"name"));
The keys defined for an 'ls' operation are currently:
ls -la: name, md5, size, group, path, permissions, user, date
ls -m: md5, path (here path is the full path name)
ls -b: guid, path (here path is the full path name)
ls: name, path
Since this is a very common use case, there are convenience functions defined to simplify the syntax of the listing example:
TGrid::Connect("alien://");
TGridResult* result = gGrid->Ls("/alice");
Int_t i = 0;
while (result->GetFileName(i))
  printf("Listing file name: %s\n",result->GetFileName(i++));
TAlien::Cd + TAlien::Pwd - Example how to navigate the CWD
The CWD allows you to reference files without absolute path names. 
The CWD is by default (if you don't use a file token, where the CWD is shared between all sessions using the same token and stored on disk) your home directory after connecting.
To see the current working directory use the Pwd command:
TGrid::Connect("alien://");
printf("Working Directory is %s\n",gGrid->Pwd());
It returns a const char* to your current working directory name.
To navigate the CWD use the Cd command:
TGrid::Connect("alien://");
Bool_t result = gGrid->Cd("/alice");
Return Values:
kTRUE Directory changed
kFALSE Directory change failed
TAlien::Mkdir - Example how to create a directory
Bool_t TAlien::Mkdir(const char* ldn, Option_t* options, Bool_t verbose)
ldn: specifies the logical directory name you want to create e.g. "/alice/"
options: flags for the command
verbose=kTRUE: controls verbosity
TGrid::Connect("alien://");
Bool_t result = gGrid->Mkdir("mydirectory");
Return Values:
kTRUE Directory created
kFALSE Directory creation failed
TAlien::Rmdir - Example how to remove a directory
Bool_t TAlien::Rmdir(const char* ldn, Option_t* options, Bool_t verbose)
ldn: specifies the logical directory name you want to remove e.g. "/alice/"
options: flags for the command
verbose=kTRUE: controls verbosity of the command
TGrid::Connect("alien://");
Bool_t result = gGrid->Rmdir("mydirectory");
Return Values:
kTRUE Directory removed
kFALSE Directory deletion failed
TAlien::Rm - Example how to remove a file entry
Bool_t TAlien::Rm(const char* lfn, Option_t* options, Bool_t verbose)
lfn: specifies the logical file name you want to remove e.g. "/alice/"
options: flags for the command (see the mkdir command in the aliensh section). 
verbose=kTRUE: switches on verbosity of the command
TGrid::Connect("alien://");
Bool_t result = gGrid->Rm("myfile");
Return Values:
kTRUE File entry removed
kFALSE File entry deletion failed
Note: as said previously, this function removes only file catalogue entries, no physical files.
TAlien::Query - Querying files in the Catalogue
The query function is a very convenient way to produce a list of files, which can be converted later into a ROOT TChain (e.g. to run a selector on your local machine or on a PROOF cluster).
virtual TGridResult *Query(const char *path, const char *pattern,
                           const char *conditions = "", const char *options = "");
The syntax is straightforward:
path: specifies the node (directory) where to start searching in the directory tree
pattern: specifies a pattern to be matched in the full filename, e.g.
  *.root matches all files with 'root' suffix
  * matches all files
  galice.root matches exact file names
conditions: conditions on metadata for the queried files
options: options to the query command (see find in the aliensh section)
This is a simple example querying all "*.root" files under a certain directory tree. The option "-l 5" sets a limit to return max. five files.
A more advanced example using metadata is shown here:
The query also returns all metadata fields in the TGridResult object. You can use TGridResult::GetKey to retrieve certain metadata values:
All returned metadata values are in text (char*) format, and you have to apply the appropriate conversion function.
File Access using TFile - TAlienFile
ROOT has a plug-in mechanism for various file access protocols. These plug-ins are called via the static TFile::Open function. The protocol specified in the URL refers to the appropriate plug-in. TAlienFile is the plug-in class implemented for files registered in AliEn. Transfers are done internally using the TXNetFile class (xrootd). 
To reference a logical file in AliEn, use the 'alien://' protocol or add '/alien' as prefix to the logical file name space:
TFile::Open("alien:///alice/")
opens an AliEn file in READ mode.
TFile::Open("/alien/alice/");
is equivalent to the first statement.
TFile::Open("alien:///alice/","RECREATE")
opens an AliEn file in WRITE mode using the file versioning system at the storage element 'ALICE::CERN::se01'.
File Copy operations in ROOT
Besides file merging functionality, the class TFileMerger implements a copy function to copy from 'arbitrary' source to destination URLs. Instead of using aliensh commands, you can copy a file within ROOT code following this example:
TFileMerger m;
m.Cp("alien:///alice/","file:/tmp/miniesd.root");
This also works when copying local to local files or AliEn to AliEn files.
Submitting multiple jobs
If you want to send similar JDLs, AliEn offers the possibility to submit a single JDL, a master job, and AliEn will split it into several sub jobs. To define a master job, you have to put the field 'Split' in the JDL.
5.11.1 Splitting a masterjob
There are several ways of splitting the job, depending on the input data or some production variables. The possible values for the 'Split' field are:
file: There will be one sub job per file in the InputData section.
directory: All the files of one directory will be analyzed in the same sub job.
se: This should be used for most analysis. The jobs are split according to the list of storage elements where data are located. Job <1> reads all data at <SE1>, job <2> all data at <SE2>. You can, however, force a second-level split of the jobs <1>, <2>, ... into several jobs using the two tags SplitMaxInputFileNumber and SplitMaxInputFileSize.
event: All files with the same name of the last subdirectory will be analyzed in the same sub job. 
userdefined: The user specifies the JDL of each sub job in SplitDefinitions.
production (<#start>-<#end>): This kind of split does not require any InputData. It will submit the same JDL several times (from #start to #end). You can reference this counter in SplitArguments using "#alien_counter#".
Any of the previous methods can be combined with the field SplitArguments. In this field, you can define the arguments for each sub job. If you want e.g. to pass the sub job counter produced by the Split="production:1-10" tag, you can write something like:
SplitArguments = "simrun.C --run 1 --event #alien_counter#";
If you define more than one value, each sub job will be submitted as many times as there are items in this array, and the sub jobs will have the elements of the array as arguments.
SplitMaxInputFileNumber: Defines the maximum number of files in each of the sub jobs. For instance, if you split per SE putting 'SplitMaxInputFileNumber=10', you make sure that no sub job will have more than ten input data files.
SplitMaxInputFileSize: Similar to the previous, but puts a limit on the size of the files. The size has to be given in bytes.
SplitDefinitions: This is a list of JDLs. If defined by the user, AliEn will take those JDLs as the sub jobs, and all of them will behave as if they were sub jobs of the original job (for instance, if the original job gets killed, all of them will get killed, and once all of the sub jobs finish, their output will be copied to the master job).
5.11.2 Split patterns
In the JDL of a master job you can use split patterns, which will be translated independently for each of the sub jobs. The possible patterns are:
#alien_counter#: This will be an integer that increases with each sub job. If you want to have the number in a specific format, you can append a format attribute, according to the standard 'printf' syntax. 
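That 'printf'-style width behaves exactly as in C. A quick stand-alone check of the zero-padding (plain C++, independent of AliEn; the helper name is made up for illustration):

```cpp
#include <cstdio>
#include <string>

// Zero-pad a sub-job counter to three digits, as a '%03i' format
// attribute in the split pattern would: 1 -> "001", 12 -> "012", 123 -> "123".
std::string formatCounter(int counter) {
    char buf[16];
    std::snprintf(buf, sizeof(buf), "%03i", counter);
    return std::string(buf);
}
```

A wider or narrower field works the same way: '%05i' would give "00001", "00012", and so on.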
For instance, #alien_counter%03i# will produce 001, 002, 003, ...
#alien_split#: The value of this will be the category of the split mechanism that you are using. For instance, if you are splitting by SE, #alien_split# will be replaced by the name of the SE. If you are splitting by file or directory, then #alien_split# contains the name of the file.
#alien_filename#: This will be replaced by the full name of the InputData.
#alien_fulldir#: This will be replaced by the full directory name.
#alien_dir#: This will be replaced by the directory name. If you want the name of a directory more levels up, you can put more than one 'dir'. For instance, #alien_dirdir# will return the name of the directory two levels up, #alien_dirdirdir# three levels, and so on.
For the last three patterns, if the sub job has more than one input data file, you can specify whether you want the names of all of them, only the first or only the last by adding 'all', 'first' or 'last' to the pattern. For instance, '#alien_first_filename#' will put only the name of the first file, while '#alien_all_filename#' will be replaced by all the files in the input data.
An example with all the patterns can be seen in the next figure:
MasterJob
 Split="file";
 Executable="date";
 InputData={"LF:/mydir1/dir1/file1", "LF:/mydir2/dir2/file2"};
 Arguments="#alien_filename# #alien_fulldir# #alien_dir# #alien_dirdir# #alien_counter# #alien_split#";
Subjob1
 Executable="date";
 InputData="LF:/mydir1/dir1/file1";
 Arguments="file1 /mydir1/dir1/file1 dir1 mydir1 /mydir1/dir1/file1 1";
Subjob2
 Executable="date";
 InputData="LF:/mydir2/dir2/file2";
 Arguments="file2 /mydir2/dir2/file2 dir2 mydir2 /mydir2/dir2/file2 2";
5.11.3 Merging jobs
In the JDL of the master job you can also specify if you want to combine the output of all the sub jobs. This is done with the field 'Merge'. In this field you can put a list of processes that have to be executed when all the sub jobs are in a final state. 
The syntax of the "Merge" field is:
Merge={"<input>:<jdl>:<output>" (,"<input2>:<jdl2>:<output2>")* };
where <input> is the name of the file that you want to merge, <output> is the name of the result, and <jdl> is the job that will be used to merge the files.
The output of the merge job will be copied to /proc/<user>/<masterjobid>/merge. If you want the output in another directory, you can specify it in the JDL field MergeOutput.
ALICE already provides several standard merging mechanisms under /alice/jdl/merge*.jdl
Appendix JDL Syntax
Every JDL tag follows this syntax for single values:
<tag-name> = "<value>";
or
<tag-name> = {"<value>"};
and for a value list:
<tag-name> = { "<val1>", "<val2>", "<val3>" [, ... "<valN>"] };
JDL Tags
Executable
This is the only compulsory field in the JDL. It states the name of the lfn that will be executed. The file must be located either in /bin or /<VO>/bin or /<HOME>/bin.
Arguments
These will be passed to the executable.
Packages
This constrains the execution of the job to a site where the package is installed. You can also request a specific version of a package. For example, Packages="AliRoot" will require the current version of AliRoot, while Packages="AliRoot::3.07.02" will require version 3.07.02.
InputFile
A list of files that will be transported to the node where the job is executed. Lfn's have to be specified like "LF:/alice/".
InputData
InputData is similar to InputFile, but the physical location of the InputData automatically adds a requirement on the location of execution to the JDL.
It is required to execute the job at a site close to the files specified here. 
You can specify patterns like "LF:/alice/simulation/2003-02/V3.09.06/00143/*/tpc.tracks.root", and then all the LFNs that satisfy this pattern will be included.
If you don't want the files to be staged into the sandbox (typical for analysis), as is done for input files, you can specify "LF:/alice/....../file.root,nodownload".
We recommend to use this tag only for a small number of files (<100) - otherwise the JDL becomes very large and slows down processing. If you have to specify many files, use InputDataCollection instead.
InputDataCollection
An input data collection is used to specify long lists of input data files and allows grouping corresponding files together. Use the find command to create an input data collection.
The input data collection file contains the InputData in XML list format as produced by the find command, e.g.
find -x example1 /alice/ *.root > /tmp/example1.xml
This file is afterwards copied into AliEn using the cp command. You should use this mechanism if you have many input files, because:
- the submission is much faster
- it is better for the job optimizer services
- you don't need to specify the InputData field
InputDataList
This is the name of the file where the Job Agent saves the InputData list. The format of this file is specified in InputDataListFormat.
InputDataListFormat
This is the list format of the InputData list. Possible formats are:
"xml-single"
"xml-group"
'xml-single' implies that every file is equivalent to one event. 
If you specify 'xml-group', a new event starts every time the first base filename appears again, e.g.
 "LF: ..../01/galice.root",      <- 1st event
 "LF: ..../01/Kinematics.root",
 "LF: ..../02/galice.root",      <- 2nd event
 "LF: ..../02/Kinematics.root",
 ......
OutputFile
The files that will be registered in the catalogue after the job has finished. You can specify the storage element by adding "@<SE-Name>" to the file name.
Example:
OutputFile="histogram.root@ALICE::CERN::se01"
OutputArchive
Here you can define which output files are archived in ZIP archives. By default, AliEn puts all OutputFiles together in ONE archive. Example:
OutputArchive = 
 {
 "*.root@Alice::CERN::Castor2",
 "log_archive:*.log,stdout,stderr@Alice::CERN::se01"
 };
This writes two archives: one with all the log files + STDOUT + STDERR stored in the SE ALICE::CERN::se01, another archive containing ROOT files, which are stored in the SE ALICE::CERN::Castor2.
ValidationCommand
This specifies the script to be executed as validation. 
If the return value of that script is !=0, the job will terminate with status ERROR_V; otherwise it proceeds to SAVED and finally DONE.
Email
If you want to receive an email when the job has finished, you can specify your email address here. This does not yet work for master jobs.
TTL
Here you specify the maximum run-time for your job. The system then selects a worker node which provides the requested run time for your job.
Split; SplitArguments; SplitMaxInputFileNumber; SplitMaxInputFileSize; SplitDefinitions
See the 'Submitting multiple jobs' section.
Appendix Job Status
The following flow chart shows the job status transitions after you have successfully submitted a job. It will help you to understand the meaning of the various error conditions.
Status Flow Diagram
Non-Error Status Explanation
In the following we describe the non-error states. The abbreviation in brackets is what the ps command shows.
INSERTING (I)
The job is waiting to be processed by the Job Optimizer. 
If this is a split job, the Optimizer will produce many sub jobs out of your master job. If this is a plain job, the Optimizer will prepare the input sandbox for this job.
WAITING (W)
The job is waiting to be picked up by any Job Agent that can fulfil the job requirements.
ASSIGNED (A)
A Job Agent has matched your job and is about to pick it up.
STARTED (ST)
A Job Agent is now preparing the input sandbox, downloading the specified input files.
RUNNING (R)
Your executable has finally been started and is running.
SAVING (SV)
Your executable has successfully terminated, and the agent saves your output to the specified storage elements.
SAVED (SVD)
The agent has successfully saved all the output files, which are not yet visible in the catalogue.
DONE (D)
A central Job Optimizer has registered the output of your job in the catalogue.
Error Status Explanation
ERROR_I (EI) - ERROR_A (EA)
These errors are normally not caused by a 'bad' user job, but arise due to service failures.
ERROR_IB (EIB)
This is a common error which occurs during the download phase of the required input files into the sandbox. Usually the cause is that a certain input file does not exist in the assumed storage element, or that the storage element is not reachable from the job worker node.
ERROR_V (EV)
The validation procedure failed, i.e. your validation script (which you have specified in the JDL) exited with a value !=0.
ERROR_SV (ESV)
At least one output file could not be saved as requested in the JDL. Probably one of the required storage elements was not available.
ZOMBIE/EXPIRED (Z/EXP)
Your job got lost on a worker node. This can happen due to a node failure or a network interruption. The only solution is to re-submit the job.
Distributed analysis
Abstract
In order to perform physics analysis in ALICE, a physicist has to use the GRID infrastructure, since the data will be distributed over many sites. 
The machinery, though complex due to the nature of the GRID, is totally transparent to the user, who is shielded by a friendly user interface. In order to provide some guidelines for successfully utilizing the functionalities provided, it was decided that an up-to-date manual was needed. Here, we try to explain the analysis framework as of today, taking into account all recent developments.
After a short introduction, the general flow of a GRID-based analysis will be outlined. The different steps to prepare one's analysis and execute it on the GRID using the newly developed analysis framework will be covered in the subsequent sections.
Introduction
Based on the official ALICE documents [p. 178, p. 178], the computing model of the experiment can be described as follows:
Tier 0 provides permanent storage of the raw data, distributes them to Tier 1 and performs the calibration and alignment task as well as the first reconstruction pass. The calibration procedure will also be addressed by PROOF clusters such as the CERN Analysis Facility (CAF) [p. 179].
Tier 1s outside CERN collectively provide permanent storage of a copy of the raw data. All Tier 1s perform the subsequent reconstruction passes and the scheduled analysis tasks.
Tier 2s generate and reconstruct the simulated Monte Carlo data and perform the chaotic analysis submitted by the physicists.
The experience of past experiments shows that the typical data analysis (chaotic analysis) will consume a large fraction of the total amount of resources. The time needed to analyze and reconstruct events depends mainly on the analysis and reconstruction algorithm. In particular, the GRID user data analysis has been developed and tested with two approaches: the asynchronous (batch) analysis and the synchronous (interactive) analysis.
Before going into detail on the different analysis tasks, we would like to address the general steps a user needs to take before submitting an analysis job:
Code validation: In order to validate the code, a user should copy a few AliESDs.root or AliAODs.root files locally and try to analyze them by following the instructions listed in section 6.5.
Interactive analysis: After the user is satisfied with both the sanity of the code and the corresponding results, the next step is to increase the statistics by submitting an interactive job that will analyze ESDs/AODs stored on the GRID. This task is done in such a way as to simulate the behaviour of a GRID worker node. If this step is successful, then we have a high probability that our batch job will be executed properly. Detailed instructions on how to perform this task are listed in section 6.6.
Finally, if the user is satisfied with the results from the previous step, a batch job can be launched that will take advantage of the whole GRID infrastructure in order to analyze files stored in different storage elements. This step is covered in detail in section 6.7.
It should be pointed out that what we describe in this note involves the usage of the whole metadata machinery of the ALICE experiment: that is, both the file/run-level metadata [ref] and the Event Tag System [ref]. The latter is used extensively because, apart from providing an event filtering mechanism to the users and thus reducing the overall analysis time significantly, it also provides a transparent way to retrieve the desired input data collection in the proper format (= a chain of ESD/AOD files) which can be directly analyzed.
On the other hand, if the Event Tag System is not used, then apart from losing the event filtering, the user also has to create the input data collection (= a chain of ESD/AOD files) manually.
Flow of the analysis procedure
Fig. 16 shows a schematic view of the flow of the analysis procedure. The first thing a typical user needs to do in order to analyze events stored on the GRID is to interact with the file catalogue to define the collection of files that he/she needs. This action implies the usage of the metadata fields assigned at the level of a run. A detailed description of the structure of the file catalogue as well as of the metadata fields on this level can be found in [ref]. As an example, consider a user who wants to create a collection of tag-files that fulfil the following criteria:
The production year should be 2008.
The period of the LHC machine should be the first of 2008 (LHC08a).
The data should come from the third reconstruction pass.
The collision system should be p+p.
The start and stop time of the run should be beyond the 19th of March 2008 and no later than 10:20 of the 20th of March 2008, respectively.
Then, what needs to be written in the AliEn shell (as a one-line command) is:

  [aliensh] find -x pp /alice/data/2008/LHC08a/*/reco/Pass3/* *Merged*tag.root
      Run:collision_system="pp" and Run:stop<"2008-03-20 10:20:33"
      and Run:start>"2008-03-19" > pp.xml

The previous lines make use of the find command of the AliEn shell [ref]. The first argument of this command is the name of the collection which will be written inside the file (the header of the xml collection). If the -x pp option is not used, we will not get back all the information about the files; instead we will just retrieve the list of logical file names (lfn). Then, the path of the file catalogue where the files we want to analyze are stored is given, followed by the name of the file. In this example, the path implies that the user requests a collection of real data (/alice/data) coming from the first period of the LHC machine in year 2008 (/2008/LHC08a), containing all the run numbers of the third pass of the reconstructed sample (/*/reco/Pass3). The last argument is the list of metadata fields: a variety of such fields can be combined using logical statements. The reader should notice that the output of this command is redirected to an xml file, which contains all the necessary information (the file's unique identifier (guid), the logical file name, etc.) about the files that fulfil the imposed selection criteria. This xml collection, which is stored in the local working directory, plays a very important role in the overall distributed analysis framework, as we will see in the following sections. From the above example it is obvious that wild cards can be used.
Going back to the description of Fig. 16 and following the flow of the arrows, we assume that the user creates a tag xml collection. Then, in parallel, inside a macro he/she imposes some selection criteria at the event level. Having those two components as an input, the Event Tag System is queried, and from this action the system can either directly provide the input data collection, which is a chain of ESD/AOD files along with the associated event list (the events that fulfil the imposed selection criteria), or a new ESD/AOD xml collection (different from the tag xml collection discussed earlier, and mainly used for batch analysis). Even in the latter case, we end up having the chain along with the associated event list.
This chain can then be processed by an analysis manager [p. 179] either locally, in AliEn [p. 178], or using PROOF [p. 179].
Figure 16: The flow of the analysis procedure: having as input the tag xml collection that we create by querying the file catalogue, together with the selection criteria, we interact with the Event Tag System and either get a chain along with the associated event list (the events that fulfil the imposed selection criteria) or create an ESD xml collection (for batch sessions) from which we create the ESD chain. This chain is processed with an analysis manager locally, in AliEn, or even in PROOF.
We would also like to point out that we have two options on how to use the framework:
We can work with AliRoot and try all the examples that will be described in the next sections by loading all the corresponding libraries.
We can try to be as flexible as possible by running ROOT along with the corresponding AliRoot libraries (e.g. for the analysis of AliESDs.root and/or AliAODs.root we need the ESD library along with the analysis framework library; the latter is needed for the new analysis framework which will be described in the next section). These libraries are created from the compilation of the relevant AliRoot code, which can be included in the so-called par file. This par file is nothing more than a tarball containing the .h and .cxx AliRoot code along with the Makefile and Makefile.arch (needed to compile the AliRoot code on different platforms).
The user has these two possibilities, although for the following examples we will concentrate on the case where we use the par files. The lines listed below show how we can set up, compile and load the library from the ESD.par:

  const char* pararchivename = "ESD";
  // Setup PAR File
  if (pararchivename) {
    char processline[1024];
    sprintf(processline, ".! tar xvzf %s.par", pararchivename);
    gROOT->ProcessLine(processline);
    const char* ocwd = gSystem->WorkingDirectory();
    gSystem->ChangeDirectory(pararchivename);

    // check for BUILD.sh and execute
    if (!gSystem->AccessPathName("PROOF-INF/BUILD.sh")) {
      printf("*******************************\n");
      printf("*** Building PAR archive    ***\n");
      printf("*******************************\n");
      if (gSystem->Exec("PROOF-INF/BUILD.sh")) {
        Error("runProcess", "Cannot Build the PAR Archive! - Abort!");
        return -1;
      }
    }
    // check for SETUP.C and execute
    if (!gSystem->AccessPathName("PROOF-INF/SETUP.C")) {
      printf("*******************************\n");
      printf("*** Setup PAR archive       ***\n");
      printf("*******************************\n");
      gROOT->Macro("PROOF-INF/SETUP.C");
    }

    gSystem->ChangeDirectory(ocwd);
  }
  gSystem->Load("libESD.so");      // library built from the PAR archive
  gSystem->Load("libANALYSIS.so"); // analysis framework library

Analysis framework
Recently, a new effort was started that led to the development of a new analysis framework [p. 179]. By the time this note was written, the framework had been validated for the processes that concern this document (GRID analysis). We will review the basic classes and functionalities of this system in this paragraph.
The basic classes that constitute this development are the following (we will give a full example of an overall implementation later):
AliAnalysisDataContainer: The class that allows the user to define the basic input and output containers.
AliAnalysisTask: This is the class that should be implemented by the user.
In the source file we can write the analysis code.
AliAnalysisManager: Inside such a manager the user defines the analysis containers, the relevant tasks, as well as the connections between them.
A practical example of a simple usage of this framework is given below, where we extract a pt spectrum of all charged tracks (a histogram) from an ESD chain (the examples can be found under the AnalysisMacros/Local/ directory of the PWG2 module of AliRoot).

AliAnalysisTaskPt.h

  #ifndef AliAnalysisTaskPt_cxx
  #define AliAnalysisTaskPt_cxx

  // example of an analysis task creating a p_t spectrum
  // Authors: Panos Christakoglou, Jan Fiete Grosse-Oetringhaus, Christian Klein-Boesing

  class TH1F;
  class AliESDEvent;

  #include "AliAnalysisTask.h"

  class AliAnalysisTaskPt : public AliAnalysisTask {
   public:
    AliAnalysisTaskPt(const char *name = "AliAnalysisTaskPt");
    virtual ~AliAnalysisTaskPt() {}

    virtual void ConnectInputData(Option_t *);
    virtual void CreateOutputObjects();
    virtual void Exec(Option_t *option);
    virtual void Terminate(Option_t *);

   private:
    AliESDEvent *fESD;  //ESD object
    TH1F *fHistPt;      //Pt spectrum

    AliAnalysisTaskPt(const AliAnalysisTaskPt&); // not implemented
    AliAnalysisTaskPt& operator=(const AliAnalysisTaskPt&); // not implemented

    ClassDef(AliAnalysisTaskPt, 1); // example of analysis
  };

  #endif

AliAnalysisTaskPt.cxx

  #include "TChain.h"
  #include "TTree.h"
  #include "TH1F.h"
  #include "TCanvas.h"

  #include "AliAnalysisTask.h"
  #include "AliAnalysisManager.h"

  #include "AliESDEvent.h"
  #include "AliESDInputHandler.h"

  #include "AliAnalysisTaskPt.h"

  // example of an analysis task creating a p_t spectrum
  // Authors: Panos Christakoglou, Jan Fiete Grosse-Oetringhaus, Christian Klein-Boesing

  ClassImp(AliAnalysisTaskPt)

  //________________________________________________________________________
  AliAnalysisTaskPt::AliAnalysisTaskPt(const char *name)
    : AliAnalysisTask(name, ""), fESD(0), fHistPt(0) {
    // Constructor

    // Define input and output slots here
    // Input slot #0 works with a TChain
    DefineInput(0, TChain::Class());
    // Output slot #0 writes into a TH1 container
    DefineOutput(0, TH1F::Class());
  }

  //________________________________________________________________________
  void AliAnalysisTaskPt::ConnectInputData(Option_t *) {
    // Connect ESD or AOD here
    // Called once

    TTree* tree = dynamic_cast<TTree*> (GetInputData(0));
    if (!tree) {
      Printf("ERROR: Could not read chain from input slot 0");
    } else {
      // Disable all branches and enable only the needed ones
      // The next two lines are different when data produced as AliESDEvent is read
      tree->SetBranchStatus("*", kFALSE);
      tree->SetBranchStatus("fTracks.*", kTRUE);

      AliESDInputHandler *esdH = dynamic_cast<AliESDInputHandler*>
        (AliAnalysisManager::GetAnalysisManager()->GetInputEventHandler());

      if (!esdH) {
        Printf("ERROR: Could not get ESDInputHandler");
      } else
        fESD = esdH->GetEvent();
    }
  }

  //________________________________________________________________________
  void AliAnalysisTaskPt::CreateOutputObjects() {
    // Create histograms
    // Called once

    fHistPt = new TH1F("fHistPt", "P_{T} distribution", 15, 0.1, 3.1);
    fHistPt->GetXaxis()->SetTitle("P_{T} (GeV/c)");
    fHistPt->GetYaxis()->SetTitle("dN/dP_{T} (c/GeV)");
    fHistPt->SetMarkerStyle(kFullCircle);
  }

  //________________________________________________________________________
  void AliAnalysisTaskPt::Exec(Option_t *) {
    // Main loop
    // Called for each event

    if (!fESD) {
      Printf("ERROR: fESD not available");
      return;
    }

    Printf("There are %d tracks in this event", fESD->GetNumberOfTracks());

    // Track loop to fill a pT spectrum
    for (Int_t iTracks = 0; iTracks < fESD->GetNumberOfTracks(); iTracks++) {
      AliESDtrack* track = fESD->GetTrack(iTracks);
      if (!track) {
        Printf("ERROR: Could not receive track %d", iTracks);
        continue;
      }
      fHistPt->Fill(track->Pt());
    } //track loop

    // Post output data.
    PostData(0, fHistPt);
  }

  //________________________________________________________________________
  void AliAnalysisTaskPt::Terminate(Option_t *) {
    // Draw result to the screen
    // Called once at the end of the query

    fHistPt = dynamic_cast<TH1F*> (GetOutputData(0));
    if (!fHistPt) {
      Printf("ERROR: fHistPt not available");
      return;
    }

    TCanvas *c1 = new TCanvas("AliAnalysisTaskPt", "Pt", 10, 10, 510, 510);
    c1->cd(1)->SetLogy();
    fHistPt->DrawCopy("E");
  }

The AliAnalysisTaskPt is a sample task that inherits from the AliAnalysisTask class. The main functions that need to be implemented are the following:
Basic constructor: Inside the constructor we need to define the type of the input (if any input is to be used) as well as of the output of the task (in our example the input is a ROOT TChain, while the output is a histogram, a TH1F).
ConnectInputData: Inside this function we need to initialize the input objects (get the TTree and the AliESDEvent object).
CreateOutputObjects: Create the output objects that will be written to the file (in our example we create the output histogram).
Exec: The place where the analysis code should be implemented.
Terminate: The function inside which we can draw histograms (as in our example) or perform any other kind of action (like the merging of outputs).
Macro that creates an AliAnalysisManager

  //____________________________________________//
  // Make the analysis manager
  AliAnalysisManager *mgr = new AliAnalysisManager("TestManager");
  AliVEventHandler* esdH = new AliESDInputHandler();
  mgr->SetInputEventHandler(esdH);
  //____________________________________________//
  // 1st Pt task
  AliAnalysisTaskPt *task1 = new AliAnalysisTaskPt("TaskPt");
  mgr->AddTask(task1);
  // Create containers for input/output
  AliAnalysisDataContainer *cinput1 = mgr->CreateContainer("cchain1",
      TChain::Class(), AliAnalysisManager::kInputContainer);
  AliAnalysisDataContainer *coutput1 = mgr->CreateContainer("chist1",
      TH1::Class(), AliAnalysisManager::kOutputContainer, "Pt.ESD.root");

  //____________________________________________//
  mgr->ConnectInput(task1, 0, cinput1);
  mgr->ConnectOutput(task1, 0, coutput1);
  if (!mgr->InitAnalysis()) return;
  mgr->PrintStatus();
  mgr->StartAnalysis("local", chain);

In the previous lines, we first created a new analysis manager:
  AliAnalysisManager *mgr = new AliAnalysisManager("TestManager");
The next step is to set the input event handler, which will allow us to retrieve a valid esd/aod object inside our task (in our example we get an AliESDEvent):
  AliVEventHandler* esdH = new AliESDInputHandler();
  mgr->SetInputEventHandler(esdH);
We then created a new AliAnalysisTaskPt object, giving it a name, and assigned it to the manager:
  AliAnalysisTaskPt *task1 = new AliAnalysisTaskPt("TaskPt");
  mgr->AddTask(task1);
Then, the input and output containers were created and their types defined:
  AliAnalysisDataContainer *cinput1 = mgr->CreateContainer("cchain1",
      TChain::Class(), AliAnalysisManager::kInputContainer);
  AliAnalysisDataContainer *coutput1 = mgr->CreateContainer("chist1",
      TH1::Class(), AliAnalysisManager::kOutputContainer, "Pt.ESD.root");
The next step was to link the task with the corresponding input and output containers, while setting the created chain of files (in the following section we will review how we can create this chain) as our input:
  mgr->ConnectInput(task1, 0, cinput1);
  mgr->ConnectOutput(task1, 0, coutput1);
Finally, we process the chain with the manager:
  mgr->StartAnalysis("local", chain);
In order to give the user an idea of how the framework could look in a complicated example, we provide the next figures. In Fig. 17 we show the flow of a typical analysis process:
Figure 17: The flow of a typical physics analysis.
A user interacts with the file catalogue and creates a tag collection.
Then, the Event Tag System is queried (details on how we can do this will be provided in the next sections) and a chain of files is created. While looping over the entries of this chain, we split our analysis into two branches. On the first, we analyze the (real or simulated) data and, as a last step, produce an output ROOT file with histograms. On the second branch, we mix the events (event-mixing method) in order to estimate our background. This second branch creates an output in the form of an Analysis Object Data (AOD) [p. 178]. A new chain is then created from this AOD, and after analyzing the corresponding entries (analysis of mixed events), the output ROOT file with the relevant histograms is created. Finally, a last process is launched that compares the two output ROOT files and extracts the studied signal.
Fig. 18 shows how we can integrate the previous use case into the new framework. The first steps are identical: a user interacts with the file catalogue and creates a tag collection, which is used as an input to query the Event Tag System and create a chain of files. This chain is the first input container and is assigned to the AliAnalysisManager. In parallel, we define two tasks which are also assigned to the manager and are both linked to the same input container (the chain): the first one analyzes the input data and creates an output container (a ROOT file with histograms: signal + background plots), while the second is designed to mix the events and create a second output container (the AOD). In order to analyze our mixed events, we initialize a second analysis manager that links the input container (the AOD) with the new task (analysis of mixed events) and creates a third output container (a ROOT file with histograms: background plots). Finally, the comparison task is added to the second manager. This task is triggered by the end of the analysis of mixed events and takes two input containers (the ROOT files holding the signal + background plots and the pure background plots) while creating one output (the extracted signal).
Figure 18: This figure shows how the physics analysis can be integrated into the new framework.
In the next paragraphs we will use the simplest possible case of a manager and a task, as described at the beginning of this section.
Interactive analysis with local ESDs
We assume that you have stored a few ESDs or AODs locally (the way to do this is described in detail in [ref]), and that the first step, the creation of the tag-files for these ESDs/AODs, which are also stored locally (under path), has already been completed [ref].
We set up the par file and then invoke the following lines, which summarize what we have to do in order to analyze data stored locally using the Event Tag System.
To specify cuts, we either do

  AliRunTagCuts *runCutsObj = new AliRunTagCuts();
  runCutsObj->SetRunId(340);
  // add more run cuts here...

  AliLHCTagCuts *lhcCutsObj = new AliLHCTagCuts();
  lhcCutsObj->SetLHCStatus("test");

  AliDetectorTagCuts *detCutsObj = new AliDetectorTagCuts();
  detCutsObj->SetListOfDetectors("TPC");

  AliEventTagCuts *evCutsObj = new AliEventTagCuts();
  evCutsObj->SetMultiplicityRange(2, 100);
  // add more event cuts here...

or we use

  const char *runCutsStr = "fAliceRunId == 340";
  const char *lhcCutsStr = "fLHCTag.fLHCState == test";
  const char *detCutsStr = "fDetectorTag.fTPC == 1";
  const char *evCutsStr = "fEventTag.fNumberOfTracks >= 2 &&
      fEventTag.fNumberOfTracks <= 100";
  // extend the strings to apply more cuts

Then, we chain the tag-files (keep in mind that the tag-files in this example are stored locally under path), we query the Event Tag System according to the cuts provided, and we follow the example shown in the previous section to create a manager and a task:

  AliTagAnalysis *tagAna = new AliTagAnalysis("ESD");
  tagAna->ChainLocalTags("path");
  TChain *chain = tagAna->QueryTags(runCutsObj, lhcCutsObj, detCutsObj, evCutsObj);
  //TChain *chain = tagAna->QueryTags(runCutsStr, lhcCutsStr, detCutsStr, evCutsStr);

  //____________________________________________//
  // Make the analysis manager
  AliAnalysisManager *mgr = new AliAnalysisManager("TestManager");
  AliVEventHandler* esdH = new AliESDInputHandler();
  mgr->SetInputEventHandler(esdH);
  //____________________________________________//
  // 1st Pt task
  AliAnalysisTaskPt *task1 = new AliAnalysisTaskPt("TaskPt");
  mgr->AddTask(task1);
  // Create containers for input/output
  AliAnalysisDataContainer *cinput1 = mgr->CreateContainer("cchain1",
      TChain::Class(), AliAnalysisManager::kInputContainer);
  AliAnalysisDataContainer *coutput1 = mgr->CreateContainer("chist1",
      TH1::Class(), AliAnalysisManager::kOutputContainer, "Pt.ESD.root");

  //____________________________________________//
  mgr->ConnectInput(task1, 0, cinput1);
  mgr->ConnectOutput(task1, 0, coutput1);
  if (!mgr->InitAnalysis()) return;
  mgr->PrintStatus();
  mgr->StartAnalysis("local", chain);

There are two possible ways to impose run- and event-cuts. The first is to create the objects AliRunTagCuts, AliLHCTagCuts, AliDetectorTagCuts and AliEventTagCuts, whose member functions will take care of your cuts. The second is to provide strings that describe the cuts you want to apply on the run and on the event level.
In the following we will describe both methods.
Object based cut strategy
The first step in the object based cut strategy is to create an AliRunTagCuts, an AliLHCTagCuts, an AliDetectorTagCuts and an AliEventTagCuts object:
  AliRunTagCuts *runCutsObj = new AliRunTagCuts();
  AliLHCTagCuts *lhcCutsObj = new AliLHCTagCuts();
  AliDetectorTagCuts *detCutsObj = new AliDetectorTagCuts();
  AliEventTagCuts *evCutsObj = new AliEventTagCuts();
These objects are used to describe the cuts imposed on your analysis, in order to reduce the number of runs and events to be analyzed to the ones effectively satisfying your criteria. There are many selections possible, and they are provided as member functions of the four classes AliRunTagCuts, AliLHCTagCuts, AliDetectorTagCuts and AliEventTagCuts. In case a member function describes a range of an entity, the run, LHC status, detector configuration or event will pass the test if the described entity lies inclusively within the limits low ≤ value ≤ high. In case of only one argument, the run or event will pass the test if the entity is equal to the input flag or mask (value == flag, value == mask) or, in case of a 'Max' or 'Min' identifier, if the run or event quantity is lower or equal (quantity ≤ value) or higher or equal (quantity ≥ value) than the provided value. A full list of available run and event cut functions can be found in Appendix \ref{App:ObjectCuts}.
Let us consider only a cut on the run number, a cut on the LHC status, a cut on the detector configuration and one on the multiplicity: all events with a run number other than 340, with an LHC status other than "test", with the TPC not included, or with fewer than 2 or more than 100 particles will be discarded.
  runCutsObj->SetRunId(340);
  lhcCutsObj->SetLHCStatus("test");
  detCutsObj->SetListOfDetectors("TPC");
  evCutsObj->SetMultiplicityRange(2, 100);
You can add as many other cuts as you like here.
String based cut strategy
As an alternative to the object based cut strategy, you also have the possibility to provide your desired cut criteria as strings. You can do that by creating separate strings, one for the run cuts and one for the event cuts. The syntax is based on C, and the string is later evaluated by the TTreeFormula mechanism of ROOT. Therefore a wide range of operators is supported (see the ROOT manual [ref] for details). The variables used to describe the run and event properties are the data members of the AliRunTagCuts, AliLHCTagCuts, AliDetectorTagCuts and AliEventTagCuts classes.
Because of the enhanced number of available operators, this system provides more flexibility.
In order to create the same cuts as in the object based example above, the strings should look like this:
  const char *runCutsStr = "fAliceRunId == 340";
  const char *lhcCutsStr = "fLHCTag.fLHCState == test";
  const char *detCutsStr = "fDetectorTag.fTPC == 1";
  const char *evCutsStr = "fEventTag.fNumberOfTracks >= 2 && fEventTag.fNumberOfTracks <= 100";
The full list of available data members to cut on can be found in Appendix \ref{App:StringCuts}. Within the quotes you can easily extend your cut statements in C-style syntax.
Regardless of the way you choose to define your cuts, you create an AliTagAnalysis object, which is responsible for actually performing your desired analysis task:
  AliTagAnalysis *tagAna = new AliTagAnalysis("ESD");
You have to provide this object with the locally stored tags, since we assumed at the beginning of this section that these files were created and stored locally (in the next section we will see how we can use the Grid-stored tags). In order to do this, you have to specify the correct path where the tag file(s) is/are located:
  tagAna->ChainLocalTags("path");
This function will pick up every file under the given path ending with tag.root. Now you ask your AliTagAnalysis object to return a TChain, imposing the cuts as defined in the AliRunTagCuts, AliLHCTagCuts, AliDetectorTagCuts and AliEventTagCuts objects or by the strings representing the run and event tags:
  TChain *chain = tagAna->QueryTags(runCutsObj, lhcCutsObj, detCutsObj, evCutsObj);

  TChain *chain = tagAna->QueryTags(runCutsStr, lhcCutsStr, detCutsStr, evCutsStr);
The arguments must all be of the same type: four Ali*TagCuts objects or four strings! If you don't want to impose run- or event-cuts, simply provide a NULL pointer.
Finally, you process the TChain by invoking your analysis manager with the following lines of code:
  //____________________________________________//
  // Make the analysis manager
  AliAnalysisManager *mgr = new AliAnalysisManager("TestManager");
  AliVEventHandler* esdH = new AliESDInputHandler();
  mgr->SetInputEventHandler(esdH);
  //____________________________________________//
  // 1st Pt task
  AliAnalysisTaskPt *task1 = new AliAnalysisTaskPt("TaskPt");
  mgr->AddTask(task1);
  // Create containers for input/output
  AliAnalysisDataContainer *cinput1 = mgr->CreateContainer("cchain1",
      TChain::Class(), AliAnalysisManager::kInputContainer);
  AliAnalysisDataContainer *coutput1 = mgr->CreateContainer("chist1",
      TH1::Class(), AliAnalysisManager::kOutputContainer, "Pt.ESD.root");

  //____________________________________________//
  mgr->ConnectInput(task1, 0, cinput1);
  mgr->ConnectOutput(task1, 0, coutput1);
  if (!mgr->InitAnalysis()) return;
  mgr->PrintStatus();
  mgr->StartAnalysis("local", chain);
One thing to mention is that even if you do not want to apply any run- and event-cuts, it is useful to use the chain of commands described above; you would then simply pass NULL pointers to the AliTagAnalysis class. The advantage of this procedure is that this setup takes care of chaining all the necessary files for you.
All the files needed to run this example can be found inside the PWG2 module of AliRoot under the AnalysisMacros/Local directory.
Interactive analysis with GRID ESDs
Once the first step (local and CAF analysis) has been successful and we are satisfied with both the code and the results, we are ready to validate our code on a larger data sample. In this section we will describe how we can interactively analyze (that is, sitting in front of a terminal and getting the results back on our screen) files that are stored in the Grid.
We will once again concentrate on the case where we use the Event Tag System [\13 PAGEREF _RefE0 \h \ 1\14Error: Reference source not found\15, \13 PAGEREF _RefE0 \h \ 1\14Error: Reference source not found\15].\rThe first thing we need to create is a collection of tag-files by querying the file catalogue. These tag-files, which are registered in the Grid, are the official ones created as a last step of the reconstruction code. Once we have a valid xml collection, we launch a ROOT session, we setup the par files (the way to do this has been described in detail in section ), we apply some selection criteria and we query the Event Tag System which returns the desired events in the proper format (a TChain along with the associated list of events that satisfy our cuts). The following lines give a snapshot of how a typical code should look like:\rUsage of AliRunTagCuts, AliLHCTagCuts, AliDetectorTagCuts and AliEventTagCuts classes\r // Case where the tag-files are stored in the file catalog\r // tag.xml is the xml collection of tag-files that was produced \r // by querying the file catalog.\r TGrid::Connect("alien://"); \r TAlienCollection* coll = TAlienCollection::Open("tag.xml");\r TGridResult* tagResult = coll->GetGridResult("",0,0);\r\r // Create a RunTagCut object\r AliRunTagCuts *runCutsObj = new AliRunTagCuts();\r runCutsObj->SetRunId(340);\r\r // Create a LHCTagCut object\r AliLHCTagCuts *lhcCutsObj = new AliLHCTagCuts();\r lhcCutsObj->SetLHCStatus(“test”);\r\r // Create a DetectorTagCut object\r AliDetectorTagCuts *detCutsObj = new AliDetectorTagCuts();\r detCutsObj->SetListOfDetectors(“TPC”);\r\r // Create an EventTagCut object\r AliEventTagCuts *evCutsObj = new AliEventTagCuts();\r evCutsObj->SetMultiplicityRange(2, 100);\r\r // Create a new AliTagAnalysis object and chain the grid stored tags\r AliTagAnalysis *tagAna = new AliTagAnalysis(“ESD”); \r tagAna->ChainGridTags(tagResult);\r\r // Cast the output of the query to a TChain\r TChain *chain = 
tagAna->QueryTags(runCutsObj, lhcCutsObj, detCutsObj, evCutsObj);\r //____________________________________________//\r // Make the analysis manager\r AliAnalysisManager *mgr = new AliAnalysisManager("TestManager");\r AliVEventHandler* esdH = new AliESDInputHandler;\r mgr->SetInputEventHandler(esdH); \r //____________________________________________//\r // 1st Pt task\r AliAnalysisTaskPt *task1 = new AliAnalysisTaskPt("TaskPt");\r mgr->AddTask(task1);\r // Create containers for input/output\r AliAnalysisDataContainer *cinput1 = mgr->CreateContainer("cchain1",TChain::Class(),AliAnalysisManager::kInputContainer);\r AliAnalysisDataContainer *coutput1 = mgr->CreateContainer("chist1", TH1::Class(),AliAnalysisManager::kOutputContainer,"Pt.ESD.root");\r \r //____________________________________________//\r mgr->ConnectInput(task1,0,cinput1);\r mgr->ConnectOutput(task1,0,coutput1);\r if (!mgr->InitAnalysis()) return;\r mgr->PrintStatus();\r mgr->StartAnalysis("local",chain);\r\rUsage of string statements\r // Case where the tag-files are stored in the file catalog\r // tag.xml is the xml collection of tag-files that was produced \r // by querying the file catalog.\r TGrid::Connect("alien://"); \r TAlienCollection* coll = TAlienCollection::Open("tag.xml");\r TGridResult* tagResult = coll->GetGridResult("",0,0);\r\r //Usage of string statements//\r const char* runCutsStr = "fAliceRunId == 340";\r const char *lhcCutsStr = "fLHCTag.fLHCState == test";\r const char *detCutsStr = "fDetectorTag.fTPC == 1";\r const char* evCutsStr = "fEventTag.fNumberOfTracks >= 2 && fEventTag.fNumberOfTracks <= 100";\r\r // Create a new AliTagAnalysis object and chain the grid stored tags\r AliTagAnalysis *tagAna = new AliTagAnalysis("ESD"); \r tagAna->ChainGridTags(tagResult);\r\r // Cast the output of the query to a TChain\r TChain *chain = tagAna->QueryTags(runCutsStr, lhcCutsStr, detCutsStr, evCutsStr);\r //____________________________________________//\r // Make the analysis manager\r 
AliAnalysisManager *mgr = new AliAnalysisManager("TestManager");\r AliVEventHandler* esdH = new AliESDInputHandler;\r mgr->SetInputEventHandler(esdH); \r //____________________________________________//\r // 1st Pt task\r AliAnalysisTaskPt *task1 = new AliAnalysisTaskPt("TaskPt");\r mgr->AddTask(task1);\r // Create containers for input/output\r AliAnalysisDataContainer *cinput1 = mgr->CreateContainer("cchain1",TChain::Class(),AliAnalysisManager::kInputContainer);\r AliAnalysisDataContainer *coutput1 = mgr->CreateContainer("chist1", TH1::Class(),AliAnalysisManager::kOutputContainer,"Pt.ESD.root");\r \r //____________________________________________//\r mgr->ConnectInput(task1,0,cinput1);\r mgr->ConnectOutput(task1,0,coutput1);\r if (!mgr->InitAnalysis()) return;\r mgr->PrintStatus();\r mgr->StartAnalysis("local",chain);\r\rWe will now review the previous lines. Since we would like to access Grid-stored files, we have to connect to the API server using the corresponding ROOT classes:\rTGrid::Connect("alien://");\rThen, we create a TAlienCollection object by opening the xml file (tag.xml) and convert it to a TGridResult:\rTAlienCollection* coll = TAlienCollection::Open("tag.xml");\rTGridResult* tagResult = coll->GetGridResult("",0,0);\rwhere tag.xml is the name of the file (stored in the working directory) containing the collection of tag-files.\rThe difference between the two cases lies in the way we apply the event tag cuts. In the first case, we create an AliRunTagCuts, an AliLHCTagCuts, an AliDetectorTagCuts and an AliEventTagCuts object and impose our criteria at the run- and event-level of the Event Tag System, while in the second we use string statements to do so. 
The corresponding lines have already been described in the previous section.\rRegardless of the way we define our cuts, we need to initialize an AliTagAnalysis object and chain the GRID-stored tags by providing as an argument to the ChainGridTags function the TGridResult we created before:\rAliTagAnalysis *tagAna = new AliTagAnalysis("ESD");\rtagAna->ChainGridTags(tagResult);\rWe then query the Event Tag System, using the imposed selection criteria, and end up with the chain of ESD files along with the associated event list (the list of events that fulfil the criteria):\rTChain *chain = tagAna->QueryTags(runCutsObj, lhcCutsObj, detCutsObj, evCutsObj);\rfor the first case (usage of objects), or\rTChain *chain = tagAna->QueryTags(runCutsStr, lhcCutsStr, detCutsStr, evCutsStr);\rfor the second case (usage of string statements).\rFinally, we process the TChain by invoking our implemented task using a manager:\r //____________________________________________//\r // Make the analysis manager\r AliAnalysisManager *mgr = new AliAnalysisManager("TestManager");\r AliVEventHandler* esdH = new AliESDInputHandler;\r mgr->SetInputEventHandler(esdH); \r //____________________________________________//\r // 1st Pt task\r AliAnalysisTaskPt *task1 = new AliAnalysisTaskPt("TaskPt");\r mgr->AddTask(task1);\r // Create containers for input/output\r AliAnalysisDataContainer *cinput1 = mgr->CreateContainer("cchain1",TChain::Class(),AliAnalysisManager::kInputContainer);\r AliAnalysisDataContainer *coutput1 = mgr->CreateContainer("chist1", TH1::Class(),AliAnalysisManager::kOutputContainer,"Pt.ESD.root");\r \r //____________________________________________//\r mgr->ConnectInput(task1,0,cinput1);\r mgr->ConnectOutput(task1,0,coutput1);\r if (!mgr->InitAnalysis()) return;\r mgr->PrintStatus();\r mgr->StartAnalysis("local",chain);\rAll the files needed to run this example can be found inside the PWG2 module of AliRoot under the AnalysisMacros/Interactive directory.\rBatch analysis\rIn 
this section, we will describe the batch framework. We will first describe the flow of the procedure, dedicating a sub-section to the integration of the Event Tag System in batch sessions. We will then use the next paragraphs to describe in detail the files needed to submit a batch job as well as the jdl syntax. Finally, we will provide a snapshot of a jdl and mention how to submit a job on the grid and how to check its status at any time.\rOverview of the framework\rFigure 19: A schematic view of the flow of analysis in a batch session. Following the arrows, we have the initial xml collection that is listed in the jdl as an InputDataCollection field. The optimizer takes this xml and splits the master job into several sub-jobs while in parallel writing new xml collections on every worker node. Then the respective job agents on every site start a ROOT or AliRoot session, read these new xml collections and interact with the xrootd servers in order to retrieve the needed files. Finally, after the analysis is completed, a single output file is created for every sub-job.\rFig. 19 shows the flow of a batch session. We start, as explained before, by querying the file catalogue and extracting a collection of files. This collection will be referenced by our jdl as an InputDataCollection field. Once we have created our jdl (a detailed description of the jdl syntax comes in the next sections) and all the files listed in it are in the proper place, we submit the job. The optimizer of the AliEn task queue parses the xml file and splits the master job into several smaller ones, each one assigned to a different site. In parallel, a new xml collection is written on every site, containing the information about the files to be analyzed on every worker node. 
This new collection will be noted in our jdl as an InputDataList field.\rThe corresponding job agent of every site starts the execution of the ROOT (in case we use the combination ROOT + par file) or AliRoot session, parses this new xml collection and interacts with the xrootd servers in order to retrieve from the storage system the files listed inside these collections. The analysis of these different sets of files results in the creation of several output files, each one containing the output of a sub-job. The user is responsible for launching a post-process that loops over the different output files in order to merge them (an example of how to merge output histograms is described in the next section).\rUsing the Event Tag System\rTo use the Event Tag System, we have to use some AliRoot classes that are in the STEER module. The main classes, as described in a previous section, are AliTagAnalysis, AliRunTagCuts, AliLHCTagCuts, AliDetectorTagCuts and AliEventTagCuts. In order to use the Event Tag System in a batch session, we need to perform a query on the client side: starting from a tag xml collection obtained by querying the file catalogue, we define our selection criteria according to our physics analysis and we create a new xml collection, this time holding the information about the AliESDs. The user should realize that the initial xml collection, named tag.xml in the examples, held the information about the location of the tag-files inside the file catalogue. Instead, what we create at this step is a new xml collection that refers to the location of the ESD files. In this xml collection we also list, for every ESD file, the events that satisfy the imposed selection criteria. 
The following lines show how we can generate a new xml collection (you can find these lines in the CreateXML.C macro inside the STEER module):\rUsage of AliRunTagCuts, AliLHCTagCuts, AliDetectorTagCuts and AliEventTagCuts classes\r // Case where the tag-files are stored in the file catalog\r // tag.xml is the xml collection of tag-files that was produced \r // by querying the file catalog.\r TGrid::Connect("alien://"); \r TAlienCollection* coll = TAlienCollection::Open("tag.xml");\r TGridResult* tagResult = coll->GetGridResult("",0,0);\r \r // Create a new AliTagAnalysis object\r AliTagAnalysis *tagAna = new AliTagAnalysis("ESD"); \r\r // Create a tag chain by providing the TGridResult\r // from the previous step as an argument\r tagAna->ChainGridTags(tagResult);\r\r //Usage of AliRunTagCuts & AliEventTagCuts classes//\r // Create a RunTagCut object\r AliRunTagCuts *runCutsObj = new AliRunTagCuts();\r runCutsObj->SetRunId(340);\r\r // Create a LHCTagCut object\r AliLHCTagCuts *lhcCutsObj = new AliLHCTagCuts();\r lhcCutsObj->SetLHCStatus("test");\r\r // Create a DetectorTagCut object\r AliDetectorTagCuts *detCutsObj = new AliDetectorTagCuts();\r detCutsObj->SetListOfDetectors("TPC");\r\r // Create an EventTagCut object\r AliEventTagCuts *evCutsObj = new AliEventTagCuts();\r evCutsObj->SetMultiplicityRange(2, 100);\r\r // Create the esd xml collection: the first argument is the \r // collection name while the others are the imposed criteria\r tagAna->CreateXMLCollection("global", runCutsObj, lhcCutsObj, detCutsObj, evCutsObj);\r\rUsage of string statements\r // Case where the tag-files are stored in the file catalog\r // tag.xml is the xml collection of tag-files that was produced \r // by querying the file catalog.\r TGrid::Connect("alien://"); \r TAlienCollection* coll = TAlienCollection::Open("tag.xml");\r TGridResult* tagResult = coll->GetGridResult("",0,0);\r \r // Create a new AliTagAnalysis object\r AliTagAnalysis *tagAna = new AliTagAnalysis("ESD"); \r\r // Create a tag chain by providing the 
TGridResult\r // from the previous step as an argument\r tagAna->ChainGridTags(tagResult);\r\r //Usage of string statements//\r const char* runCutsStr = "fAliceRunId == 340";\r const char *lhcCutsStr = "fLHCTag.fLHCState == test";\r const char *detCutsStr = "fDetectorTag.fTPC == 1";\r const char* evCutsStr = "fEventTag.fNumberOfTracks >= 2 &&\r fEventTag.fNumberOfTracks <= 100";\r\r // Create the esd xml collection: the first argument is the collection name \r // while the others are the imposed criteria \r tagAna->CreateXMLCollection("global", runCutsStr, lhcCutsStr, detCutsStr, evCutsStr);\rThe reader should be familiar by now with the previous lines since they have already been described in detail. What is new is the very last line of code, where we call the CreateXMLCollection function of the AliTagAnalysis class, which takes as arguments the name of the output xml collection (a collection of ESDs) and the four run-, lhc-, detector- and event-tag cuts (objects or strings). This output collection will be created in the working directory.\rFigure 20: A schematic view of the flow of the analysis procedure in a batch session using the Event Tag System. Following the arrows, we have the initial xml collection, created by the AliTagAnalysis class, listed in the jdl as an InputDataCollection field. The optimizer takes this xml once the master job is submitted and splits it into several sub-jobs while in parallel writing new xml collections on every worker node. These xml collections hold the information about the events that satisfy the imposed selection criteria, grouped by file: the analysis is performed only on these events on every worker node.\rThe next step is to create a jdl file inside of which this newly created xml collection (named global.xml in our example) will be defined as an InputDataCollection field. 
Then we once again submit the job and the optimizer parses the xml and splits the master job into several sub-jobs. In parallel, a new xml collection is written on every site, containing the information about the files that will be analyzed on every worker node as well as the corresponding list of events that satisfy the imposed selection criteria for every file. Thus, on every worker node, we will analyze the created chain along with the associated event list as described in Fig. 20. Once finished, we will get several output files, over which we will have to loop with a post-process in order to merge them.\rIn the following paragraphs we will provide some practical information about the batch sessions, starting from the files needed to submit a job, the jdl syntax etc.\rFiles needed\rThe files needed in order to submit a batch job are listed below:\rExecutable: This is a file that should be stored under the $HOME/bin AliEn directory of each user. It is used to start the ROOT/AliRoot session on every worker node. Users can always use existing executables that can be found under /bin. An example is given below.\rPar file: A par file is a tarball containing the header files and the source code of AliRoot, needed to build a certain library. It is used in the case where we do not want to launch AliRoot but instead want the flexibility of launching ROOT along with the corresponding AliRoot library (e.g. ROOT and the lib*.so). It is not compulsory, although it is recommended, to use a par file in an analysis.\rMacro: This is the file that each user needs to implement. Inside the macro we set up the par file (in case we use it) and load the needed libraries. Then we open the input xml collection and convert it into a chain of trees. 
The next step is to create an AliAnalysisManager, assign a task to the manager and define the input and output containers. Finally, we process this chain with a selector. A snapshot of such a file has already been given in a previous section.\rXML collection: This is the collection created either by directly querying the file catalogue (in the case where we don't use the Event Tag System) or by querying the Event Tag System (the case described in the previous paragraph).\rJDL: This is a compulsory file that describes the input/output files as well as the packages that we are going to use. A detailed description of the JDL fields is provided in the next lines.\rExample of an "executable"\r #!/bin/bash\r \r echo ===========================\r echo $PATH\r echo $LD_LIBRARY_PATH\r echo ==========================\r \r root -b -x runBatch.C;\rJDL syntax\rIn this section we will try to describe in detail the different jdl fields.\rExecutable: This is the only compulsory field of the JDL, where we give the logical file name (lfn) of the executable, which should be stored in /bin or $VO/bin or $HOME/bin. A typical syntax can be:\rExecutable="";\rPackages: The definition of the packages that will be used in the batch session. The different packages installed can be found by typing packages in the AliEn shell. A typical syntax can be:\rPackages={"APISCONFIG::V2.4","VO_ALICE@ROOT::v5-16-00"};\rJobtag: A comment that describes the job. A typical syntax can be:\rJobtag={"comment:AliEn Tutorial batch example"};\rInputFile: In this field we define the files that will be transported to the node where the job will run and are needed for the analysis. 
A typical syntax can be:\rInputFile= {\r"LF:/alice/",\r"LF:/alice/",\r"LF:/alice/",\r"LF:/alice/",\r"LF:/alice/",\r"LF:/alice/",\r"LF:/alice/"};\rInputData: This field, when defined, requires that the job be executed on a site close to the files specified here. It is meant to be used in the case where you don't want to use a collection of files. It is not really practical for large samples, because it implies that each user writes a large number of lines in the jdl, thus making it difficult to handle; it can, however, be useful in the case where we use only a few files. A typical syntax can be:\rInputData= {\r"LF:/alice/"};\rInputDataList: This is the name of the xml file created by the job agent after the job has been split, containing the lfn of the files of the closest storage element. A typical syntax can be:\rInputDataList="wn.xml";\rInputDataListFormat: The format of the previous field. It can be either "xml-single", where we imply that every xml entry corresponds to one file, or "xml-group", where we imply that a new set of files starts every time the base filename appears (e.g. an xml containing AliESDs.root, Kinematics.root, galice.root). In the context of this note, where we analyze only ESDs and not the generator information, we should use the first option. A typical syntax can be:\rInputDataListFormat="xml-single";\rOutputFile: Here we define the files that will be registered in the file catalogue once the job finishes. If we don't define the storage element, then the files will be registered in the default one, which at the moment is Castor2 at CERN. A typical syntax can be:\rOutputFile={"stdout@ALICE::CERN::se",\r"stderr@ALICE::CERN::Castor2","*.root@ALICE::CERN::se"};\rOutputDir: Here we define the directory in the file catalogue under which the output files and archives will be stored. A typical syntax can be:\rOutputDir="/alice/";\rOutputArchive: Here we define the files that we want to be put inside an archive. 
It is recommended that users use this field in order to place all their output files in such an archive, which will be the only registered file after a job finishes. This is essential in the case of storage systems such as Castor, which are not effective in handling small files. A typical syntax can be:\rOutputArchive={"logarchive:stdout,stderr,*.log@Alice::CERN::se",\r"*.root@Alice::CERN::se"};\rValidationcommand: Specifies the script to be used as a validation script (used for production). A typical syntax can be:\rValidationcommand = "/alice/";\rEmail: If this field is defined, then you'll be informed by e-mail when your job has finished. A typical syntax can be:\rEmail="";\rTTL: One of the important fields of the JDL. It allows the user to define the maximum time in seconds the job will run. This field is used by the optimizer for the ordering and the assignment of the priority of each job: the lower the TTL, the higher the probability that the job runs quickly after submission. If the running time exceeds the value defined in this field, the job is killed automatically. The value of this field should not exceed 100000 sec. A typical syntax can be:\rTTL = "21000";\rSplit: Used in the case where we want to split our master job into several sub-jobs. Usually the job is split per storage element (se). A typical syntax can be:\rSplit="se";\rSplitMaxInputFileNumber: Used to define the maximum number of files that will be analyzed on every worker node. 
A typical syntax can be:\rSplitMaxInputFileNumber="100";\rIn summary, the following lines give a snapshot of a typical jdl:\rExample of a JDL file\r# this is the startup process for root\rExecutable="";\rJobtag={"comment:AliEn tutorial batch example - ESD"};\r\r# we split per storage element\rSplit="se";\r\r# we want each job to read 5 input files\rSplitMaxInputFileNumber="5";\r\r# this job has to run in the ANALYSIS partition\rRequirements=( member(other.GridPartitions,"Analysis") );\r\r# validation command\rValidationcommand ="/alice/";\r\r# we need ROOT and the API service configuration package\rPackages={"APISCONFIG::V2.4","VO_ALICE@ROOT::v5-16-00"};\r\r# Time to live\rTTL = "30000";\r\r# Automatic merging\rMerge={"Pt.ESD.root:/alice/jdl/mergerootfile.jdl:Pt.ESD.Merged.root"};\r\r# Output dir of the automatic merging\rMergeOutputDir="/alice/";\r\r# ROOT will read this collection file to know which files to analyze\rInputDataList="wn.xml";\r\r# ROOT requires the collection file in the xml-single format\rInputDataListFormat="xml-single";\r\r# this is our collection file containing the files to be analyzed\rInputDataCollection="LF:/alice/,nodownload";\r\rInputFile= {\r"LF:/alice/",\r"LF:/alice/",\r"LF:/alice/",\r"LF:/alice/",\r"LF:/alice/",\r"LF:/alice/",\r"LF:/alice/"};\r\r# Output archive \rOutputArchive={"log_archive:stdout,stderr,*.log@Alice::CERN::se","*.root@Alice::CERN::se"};\r\r# Output directory\rOutputDir="/alice/";\r\r# email\rEmail="";\rJob submission - Job status\rAfter creating the jdl and registering all the files needed, we are ready to submit our batch job. This can be done by typing:\rsubmit <filename>.jdl\rat the AliEn shell prompt. If there is no mistake in our JDL, the job will be assigned a JOBID. 
We can always see which jobs have been submitted by a certain user by typing:\rtop -user <username>\rLater on, we can check a job's status by typing:\rps -trace <JOBID>\rThe different states are:\rINSERTING: The job is waiting to be processed by the optimizer.\rSPLITTING: The optimizer starts splitting the job if this is requested in the JDL.\rSPLIT: Several sub-jobs were created from the master job.\rWAITING: The job is waiting to be assigned to a job agent that fulfils its requirements.\rASSIGNED: A job agent is about to pick up this job.\rSTARTED: The job agent is preparing the input sandbox and transferring the files listed in the InputFile field.\rRUNNING: The executable has started running.\rSAVING: The executable has finished running and the job agent saves the output to the specified storage elements.\rSAVED: The agent has successfully stored all the output files, which are not yet available in the file catalogue.\rDONE: The central optimizer has registered the output in the catalogue.\rFinally, as soon as a job's status changes to RUNNING, a user can check its stdout and stderr by typing:\rspy <JOBID> stdout\rspy <JOBID> stderr\rat the AliEn prompt.\rMerging the output\rAssuming that everything worked out and that the status of the job we submitted has turned to DONE, we are ready to launch the post-process that will access the different output files from every sub-job and merge them. We will concentrate on the case where the output files contain simple histograms (which covers the majority of physics analyses). If the output files contain other analysis objects, then we should provide our own merge functions. 
The following lines should be put inside the jdl file:\rJDL lines that launch the automatic merging of the output\r# Automatic merging\rMerge={"Pt.ESD.root:/alice/jdl/mergerootfile.jdl:Pt.ESD.Merged.root"};\r\r# Output dir of the automatic merging\rMergeOutputDir="/alice/";\r\rThe previous lines allow the system to submit the mergerootfile.jdl that expects the output files to be named Pt.ESD.root and creates the Pt.ESD.Merged.root in the directory defined in the MergeOutputDir field.\rAll the needed files to run these examples can be found inside the PWG2 module of AliRoot under the AnalysisMacros/Batch directory.\rRun-LHC-Detector and event level cut member functions\rRun level member functions\r void SetRunId(Int_t Pid);\r void SetMagneticField(Float_t Pmag);\r void SetRunStartTimeRange(Int_t t0, Int_t t1);\r void SetRunStopTimeRange(Int_t t0, Int_t t1);\r void SetAlirootVersion(TString v);\r void SetRootVersion(TString v);\r void SetGeant3Version(TString v);\r void SetRunQuality(Int_t Pn);\r void SetBeamEnergy(Float_t PE);\r void SetBeamType(TString Ptype);\r void SetCalibVersion(Int_t Pn);\r void SetDataType(Int_t i);\rLHC level member functions\rvoid SetLHCState(TString state);\rvoid SetLHCLuminosityRange(Float_t low, Float_t high);\r\rDetector level member functions\rvoid SetListOfDetectors(const TString& detectors)\rEvent level member functions\r void SetNParticipantsRange(Int_t low, Int_t high);\r void SetImpactParamRange(Float_t low, Float_t high);\r\r void SetPrimaryVertexXRange(Float_t low, Float_t high);\r void SetPrimaryVertexYRange(Float_t low, Float_t high);\r void SetPrimaryVertexZRange(Float_t low, Float_t high);\r void SetPrimaryVertexFlag(Int_t flag);\r void SetPrimaryVertexZErrorRange(Float_t low, Float_t high);\r\r void SetTriggerMask(ULong64_t trmask);\r void SetTriggerCluster(UChar_t trcluster);\r\r void SetZDCNeutron1Range(Float_t low, Float_t high);\r void SetZDCProton1Range(Float_t low, Float_t high);\r void SetZDCEMRange(Float_t low, 
Float_t high);\r void SetZDCNeutron2Range(Float_t low, Float_t high);\r void SetZDCProton2Range(Float_t low, Float_t high);\r void SetT0VertexZRange(Float_t low, Float_t high);\r\r void SetMultiplicityRange(Int_t low, Int_t high);\r void SetPosMultiplicityRange(Int_t low, Int_t high);\r void SetNegMultiplicityRange(Int_t low, Int_t high);\r void SetNeutrMultiplicityRange(Int_t low, Int_t high);\r void SetNV0sRange(Int_t low, Int_t high);\r void SetNCascadesRange(Int_t low, Int_t high);\r void SetNKinksRange(Int_t low, Int_t high);\r\r void SetNPMDTracksRange(Int_t low, Int_t high);\r void SetNFMDTracksRange(Int_t low, Int_t high);\r void SetNPHOSClustersRange(Int_t low, Int_t high);\r void SetNEMCALClustersRange(Int_t low, Int_t high);\r void SetNJetCandidatesRange(Int_t low, Int_t high);\r\r void SetTopJetEnergyMin(Float_t low);\r void SetTopNeutralEnergyMin(Float_t low);\r void SetNHardPhotonsRange(Int_t low, Int_t high);\r void SetNChargedAbove1GeVRange(Int_t low, Int_t high);\r void SetNChargedAbove3GeVRange(Int_t low, Int_t high);\r void SetNChargedAbove10GeVRange(Int_t low, Int_t high);\r void SetNMuonsAbove1GeVRange(Int_t low, Int_t high);\r void SetNMuonsAbove3GeVRange(Int_t low, Int_t high);\r void SetNMuonsAbove10GeVRange(Int_t low, Int_t high);\r void SetNElectronsAbove1GeVRange(Int_t low, Int_t high);\r void SetNElectronsAbove3GeVRange(Int_t low, Int_t high);\r void SetNElectronsAbove10GeVRange(Int_t low, Int_t high);\r void SetNElectronRange(Int_t low, Int_t high);\r void SetNMuonRange(Int_t low, Int_t high);\r void SetNPionRange(Int_t low, Int_t high);\r void SetNKaonRange(Int_t low, Int_t high);\r void SetNProtonRange(Int_t low, Int_t high);\r void SetNLambdaRange(Int_t low, Int_t high);\r void SetNPhotonRange(Int_t low, Int_t high);\r void SetNPi0Range(Int_t low, Int_t high);\r void SetNNeutronRange(Int_t low, Int_t high);\r void SetNKaon0Range(Int_t low, Int_t high); \r void SetTotalPRange(Float_t low, Float_t high);\r void SetMeanPtRange(Float_t 
low, Float_t high);\r void SetTopPtMin(Float_t low);\r void SetTotalNeutralPRange(Float_t low, Float_t high);\r void SetMeanNeutralPtPRange(Float_t low, Float_t high);\r void SetTopNeutralPtMin(Float_t low);\r void SetEventPlaneAngleRange(Float_t low, Float_t high);\r void SetHBTRadiiRange(Float_t low, Float_t high);\rString-based object and event level tags\rVariables for run cuts\r Int_t fAliceRunId; //the run id\r Float_t fAliceMagneticField; //value of the magnetic field\r Int_t fAliceRunStartTimeMin; //minimum run start date\r Int_t fAliceRunStartTimeMax; //maximum run start date\r Int_t fAliceRunStopTimeMin; //minimum run stop date\r Int_t fAliceRunStopTimeMax; //maximum run stop date\r TString fAlirootVersion; //aliroot version\r TString fRootVersion; //root version\r TString fGeant3Version; //geant3 version\r Bool_t fAliceRunQuality; //validation script\r Float_t fAliceBeamEnergy; //beam energy cm\r TString fAliceBeamType; //run type (pp, AA, pA)\r Int_t fAliceCalibrationVersion; //calibration version \r Int_t fAliceDataType; //0: simulation -- 1: data \rVariables for LHC cuts\rTo invoke one of these cuts, please make sure to use the fLHCTag. identifier. Example: "fLHCTag.fLHCState == test".\rTString fLHCState; //LHC run conditions – comment\rFloat_t fLHCLuminosity; //the value of the luminosity\r\rVariables for detector cuts\rTo invoke one of these cuts, please make sure to use the fDetectorTag. identifier. 
Example: "fDetectorTag.fTPC == 1".\rBool_t fITSSPD; //ITS-SPD active = 1\r Bool_t fITSSDD; //ITS-SDD active = 1\r Bool_t fITSSSD; //ITS-SSD active = 1\r Bool_t fTPC; //TPC active = 1\r Bool_t fTRD; //TRD active = 1\r Bool_t fTOF; //TOF active = 1\r Bool_t fHMPID; //HMPID active = 1\r Bool_t fPHOS; //PHOS active = 1\r Bool_t fPMD; //PMD active = 1\r Bool_t fMUON; //MUON active = 1\r Bool_t fFMD; //FMD active = 1\r Bool_t fTZERO; //TZERO active = 1\r Bool_t fVZERO; //VZERO active = 1\r Bool_t fZDC; //ZDC active = 1\r Bool_t fEMCAL; //EMCAL active = 1\r\rVariables for event cuts\rTo invoke one of these cuts, please make sure to use the fEventTag. identifier. Example: "fEventTag.fNParticipants < 100".\r Int_t fNParticipantsMin, fNParticipantsMax;\r Float_t fImpactParamMin, fImpactParamMax;\r\r Float_t fVxMin, fVxMax;\r Float_t fVyMin, fVyMax;\r Float_t fVzMin, fVzMax;\r Int_t fPrimaryVertexFlag;\r Float_t fPrimaryVertexZErrorMin, fPrimaryVertexZErrorMax;\r\r ULong64_t fTriggerMask;\r UChar_t fTriggerCluster;\r \r Float_t fZDCNeutron1EnergyMin, fZDCNeutron1EnergyMax;\r Float_t fZDCProton1EnergyMin, fZDCProton1EnergyMax;\r Float_t fZDCNeutron2EnergyMin, fZDCNeutron2EnergyMax;\r Float_t fZDCProton2EnergyMin, fZDCProton2EnergyMax;\r Float_t fZDCEMEnergyMin, fZDCEMEnergyMax;\r Float_t fT0VertexZMin, fT0VertexZMax;\r\r Int_t fMultMin, fMultMax;\r Int_t fPosMultMin, fPosMultMax;\r Int_t fNegMultMin, fNegMultMax;\r Int_t fNeutrMultMin, fNeutrMultMax;\r Int_t fNV0sMin, fNV0sMax;\r Int_t fNCascadesMin, fNCascadesMax;\r Int_t fNKinksMin, fNKinksMax;\r \r Int_t fNPMDTracksMin, fNPMDTracksMax;\r Int_t fNFMDTracksMin, fNFMDTracksMax;\r Int_t fNPHOSClustersMin, fNPHOSClustersMax;\r Int_t fNEMCALClustersMin, fNEMCALClustersMax;\r Int_t fNJetCandidatesMin, fNJetCandidatesMax;\r\r Float_t fTopJetEnergyMin;\r Float_t fTopNeutralEnergyMin;\r \r Int_t fNHardPhotonCandidatesMin, fNHardPhotonCandidatesMax;\r Int_t fNChargedAbove1GeVMin, fNChargedAbove1GeVMax;\r Int_t fNChargedAbove3GeVMin, 
fNChargedAbove3GeVMax;\r Int_t fNChargedAbove10GeVMin, fNChargedAbove10GeVMax;\r Int_t fNMuonsAbove1GeVMin, fNMuonsAbove1GeVMax;\r Int_t fNMuonsAbove3GeVMin, fNMuonsAbove3GeVMax;\r Int_t fNMuonsAbove10GeVMin, fNMuonsAbove10GeVMax;\r Int_t fNElectronsAbove1GeVMin, fNElectronsAbove1GeVMax;\r Int_t fNElectronsAbove3GeVMin, fNElectronsAbove3GeVMax;\r Int_t fNElectronsAbove10GeVMin,fNElectronsAbove10GeVMax;\r Int_t fNElectronsMin, fNElectronsMax;\r Int_t fNMuonsMin, fNMuonsMax;\r Int_t fNPionsMin, fNPionsMax;\r Int_t fNKaonsMin, fNKaonsMax;\r Int_t fNProtonsMin, fNProtonsMax;\r Int_t fNLambdasMin, fNLambdasMax;\r Int_t fNPhotonsMin, fNPhotonsMax;\r Int_t fNPi0sMin, fNPi0sMax;\r Int_t fNNeutronsMin, fNNeutronsMax;\r Int_t fNKaon0sMin, fNKaon0sMax;\r Float_t fTotalPMin, fTotalPMax;\r Float_t fMeanPtMin, fMeanPtMax;\r Float_t fTopPtMin;\r Float_t fTotalNeutralPMin, fTotalNeutralPMax;\r Float_t fMeanNeutralPtMin, fMeanNeutralPtMax;\r Float_t fTopNeutralPtMin;\r Float_t fEventPlaneAngleMin, fEventPlaneAngleMax;\r Float_t fHBTRadiiMin, fHBTRadiiMax;\rSummary\rTo summarize, we tried to describe the overall distributed analysis framework by also providing some practical examples. The intention of this note, among other things, was to inform the users about the whole procedure starting from the validation of the code which is usually done locally up to the submission of GRID jobs. \rWe started off by showing how one can interact with the file catalogue and extract a collection of files. Then we presented how one can use the Event Tag System in order to analyze a few ESD/AOD files stored locally, a step which is necessary for the validation of the user's code (code sanity and result testing). The next step was the interactive analysis using GRID stored ESDs/AODs. Finally, we presented in detail the whole machinery of the distributed analysis and we also provided some practical examples to create a new xml collection using the Event Tag System. 
We also presented in detail the files needed in order to submit a GRID job and, on this basis, we explained the relevant JDL fields and their syntax.
We should point out once again that this note concentrated on the usage of the whole metadata machinery of the experiment: that is, both the file/run-level metadata and the Event Tag System. The latter is used because, apart from providing a fast event-filtering mechanism, it also takes care of the creation of the analyzed input sample in the proper format (TChain) in a fully transparent way. Thus, it is easy to plug it in as a first step in the new analysis framework [179], which was being developed and validated as this note was written.

Appendix
Kalman filter
Kalman filtering is a quite general and powerful method for statistical estimations and predictions. The conditions for its applicability are the following. A certain "system" is determined at any moment in time t_k by a state vector x_k. The state vector varies with time according to an evolution equation

  x_k = f_k(x_{k-1}) + ε_k

It is supposed that f_k is a known deterministic function and ε_k is a random vector of intrinsic "process noise" which has a zero mean value (⟨ε_k⟩ = 0) and a known covariance matrix (cov(ε_k) = Q_k).
Generally, only some function h_k of the state vector can be observed, and the result of the observation m_k is corrupted by a "measurement noise" δ_k:

  m_k = h_k(x_k) + δ_k

The measurement noise is supposed to be unbiased (⟨δ_k⟩ = 0) and to have a definite covariance matrix (cov(δ_k) = V_k). In many cases, a matrix H_k can represent the measurement function h_k:

  h_k(x_k) = H_k x_k

If, at a certain time t_{k-1}, we are given some estimates x̃_{k-1} of the state vector and C̃_{k-1} of its covariance matrix, we can extrapolate these estimates to the next time slot t_k by means of the following formulas (this is called "prediction"):

  x̃_k^{k-1} = f_k(x̃_{k-1})
  C̃_k^{k-1} = F_k C̃_{k-1} F_k^T + Q_k,   F_k = ∂f_k/∂x_{k-1}

The value of the predicted χ² increment can also be calculated:

  (χ²)_k^{k-1} = (r_k^{k-1})^T (R_k^{k-1})^{-1} r_k^{k-1},
  r_k^{k-1} = m_k − H_k x̃_k^{k-1},   R_k^{k-1} = V_k + H_k C̃_k^{k-1} H_k^T

The number of degrees of freedom is equal to the dimension of the vector m_k.
If at the moment t_k, together with the results of prediction, we also have the results of the state vector measurement, this additional information can be combined with the prediction results (this is called "filtering"). As a consequence, the estimation of the state vector improves with respect to the previous step:

  x̃_k = x̃_k^{k-1} + K_k (m_k − H_k x̃_k^{k-1})
  C̃_k = C̃_k^{k-1} − K_k H_k C̃_k^{k-1}

where K_k = C̃_k^{k-1} H_k^T (V_k + H_k C̃_k^{k-1} H_k^T)^{-1} is the Kalman gain matrix.
Finally, the following formula gives the value of the filtered χ² increment:

  (χ²)_k = (r_k)^T (R_k)^{-1} r_k,
  r_k = m_k − H_k x̃_k,   R_k = V_k − H_k C̃_k H_k^T

It can be shown that the predicted χ² value is equal to the filtered one:

  (χ²)_k^{k-1} = (χ²)_k

The "prediction" and "filtering" steps are repeated as many times as we have measurements of the state vector.

Bayesian approach for combined particle identification
Particle identification over a large momentum range and for many particle species is often one of the main design requirements of high-energy physics experiments. The ALICE detectors are able to identify particles with momenta from 0.1 GeV/c up to 10 GeV/c. This can be achieved by combining several detecting systems that are efficient in narrower and complementary momentum sub-ranges. The situation is complicated by the amount of data to be processed (about 10^7 events with about 10^4 tracks in each). Thus, the particle identification procedure should satisfy the following requirements:
It should be as automatic as possible.
It should be able to combine PID signals of different nature (e.g. dE/dx and time-of-flight measurements).
When several detectors contribute to the PID, the procedure must profit from this situation by providing an improved PID.
When only some detectors identify a particle, the signals from the other detectors must not affect the combined PID.
It should take into account the fact that, due to different event and track selections, the PID depends on the kind of analysis.
In this report we demonstrate that combining PID signals in a Bayesian way satisfies all these requirements.
Bayesian PID with a single detector
Let r(s|i) be the conditional probability density function to observe in some detector a PID signal s if a particle of i-type (i = e, μ, π, K, p, …) is detected.
The probability to be a particle of i-type if the signal s is observed, w(i|s), depends not only on r(s|i), but also on how often this type of particle is registered in the considered experiment (the "a priori" probability C_i to find this kind of particle in the detector). The corresponding relation is given by Bayes' formula:

  w(i|s) = r(s|i) C_i / Σ_k r(s|k) C_k      (1)

Under some reasonable conditions, r(s|i) and C_i are not correlated, so that one can rely on the following approximation:
The functions r(s|i) reflect only properties of the detector ("detector response functions") and do not depend on other external conditions like event and track selections.
On the contrary, the quantities C_i ("relative concentrations" of particles of i-type) do not depend on the detector properties, but do reflect the external conditions, selections etc.
The PID procedure is done in the following way. First, the detector response function is obtained. Second, a value r(s|i) is assigned to each track. Third, the relative concentrations C_i of the particle species are estimated for the subset of events and tracks selected in a specific physics analysis. Finally, an array of probabilities w(i|s) is calculated (see Equation 1) for each track within the selected subset.
The probabilities w(i|s) are often called PID weights.
The conditional probability density function r(s|i) (the detector response function) can always be parameterized with sufficient precision using available experimental data.
In the simplest approach, the a priori probabilities C_i (the relative concentrations of particles of i-type) can be assumed to be equal.
However, in many cases one can do better. For example in ALICE, when doing PID in the TPC for the tracks that are registered both in the TPC and in the Time-Of-Flight detector (TOF), these probabilities can be estimated using the measured time-of-flight. One simply fills a histogram of the following quantity:

  m = p sqrt(c²t²/l² − 1)

where p and l are the reconstructed track momentum and length and t is the measured time-of-flight. Such a histogram peaks near the values of m that correspond to the masses of the particles.
Forcing some of the C_i to be exactly zero excludes the corresponding particle type from the PID analysis, and such particles will be redistributed over the other particle classes (see Equation 1). This can be useful for the kinds of analysis where, for the particles of a certain type, one is not concerned by the contamination but, at the same time, the efficiency of the PID is of particular importance.
PID combined over several detectors
This method can easily be applied to combining PID measurements from several detectors.
Considering the whole system of N contributing detectors as a single "super-detector", one can write the combined PID weights W(i|s̄) in a form similar to that of Equation 1:

  W(i|s̄) = R(s̄|i) C_i / Σ_k R(s̄|k) C_k

where s̄ = (s_1, s_2, …, s_N) is the vector of PID signals registered in the first, second and other contributing detectors, C_i are the a priori probabilities to be a particle of i-type (the same as in Equation 1) and R(s̄|i) is the combined response function of the whole system of detectors.
If the single-detector PID measurements s_j are uncorrelated (which is approximately true in the case of the ALICE experiment), the combined response function is the product of the single response functions r(s_j|i) (the ones in Equation 1):

  R(s̄|i) = Π_{j=1..N} r(s_j|i)      (2)

One obtains the following expression for the PID weights combined over the whole system of detectors:

  W(i|s̄) = C_i Π_j r(s_j|i) / Σ_k C_k Π_j r(s_j|k)      (3)

In the program code, the combined response functions R(s̄|i) do not necessarily have to be treated as analytical. They can be "procedures" (C++ functions, for example). Also, some additional effects like the probability to obtain a mis-measurement (mis-matching) in one or several contributing detectors can be accounted for.
Formula 3 has the following useful features:
If, for a certain particle momentum, one (or several) of the detectors is not able to identify the particle type (i.e.
the r(s_j|i) are equal for all i), the contribution of such a detector cancels out from the formula.
When several detectors are capable of separating the particle types, their contributions are accumulated with the proper weights, thus providing an improved combined particle identification.
Since the single-detector response functions r(s|i) can be obtained in advance at the calibration step and the combined response can be approximated by Equation 2, a part of the PID (the calculation of the R(s̄|i)) can be done track-by-track "once and forever" by the reconstruction software, and the results can be stored in the Event Summary Data. The final PID decision, being dependent via the a priori probabilities C_i on the event and track selections, is then postponed until the physics analysis of the data.

Stability with respect to variations of the a priori probabilities
Since the results of this PID procedure explicitly depend on the choice of the a priori probabilities C_i (and, in fact, this kind of dependence is unavoidable in any case), the question of the stability of the results with respect to the almost arbitrary choice of C_i becomes important.
Fortunately, there is always some momentum region where the single-detector response functions for the different particle types of at least one of the detectors do not significantly overlap, and so the stability is guaranteed.
The more detectors enter the combined PID procedure, the wider this momentum region becomes and the more stable the results are.
Detailed simulations using the AliRoot framework show that the results of the PID combined over all the ALICE central detectors are, within a few per cent, stable with respect to variations of the C_i up to at least 3 GeV/c.
Features of the Bayesian PID
Particle identification in the ALICE experiment at the LHC can be done in a Bayesian way. The procedure consists of three parts:
First, the single-detector PID response functions r(s|i) are obtained. This is done by the calibration software.
Second, for each reconstructed track the combined PID response R(s̄|i) is calculated, and the effects of possible mis-measurements of the PID signals can be accounted for. The results are written to the Event Summary Data and, later, are used in all kinds of physics analysis of the data. This is a part of the reconstruction software.
And finally, for each kind of physics analysis, after the corresponding event and track selection is done, the a priori probabilities C_i to be a particle of a certain i-type within the selected subset are estimated, and the PID weights W(i|s̄) are calculated by means of Equation 3. This part of the PID procedure belongs to the analysis software.
The advantages of the particle identification procedure described here are:
The fact that, due to different event and track selections, the PID depends on the particular kind of physics analysis performed is naturally taken into account.
The capability to combine, in a common way, signals from detectors having quite different nature and shape of the PID response functions (silicon, gas, time-of-flight, transition radiation and Cherenkov detectors).
No interactive multidimensional graphical cuts are involved.
The procedure is fully automatic.

Vertex estimation using tracks
Each track, reconstructed in the TPC and in the ITS, is approximated with a straight line at the position of the closest approach to the nominal primary-vertex position (the nominal vertex position is supposed to be known with a precision of 100–200 μm). Then, all possible track pairs (i, j) are considered and, for each pair, the centre C_{ij} of the segment of minimum approach between the two lines is found. The coordinates of the primary vertex are determined as the average over all pairs:

  r_v = (1/N) Σ_{(i,j)} r_{C_{ij}}

where N is the number of track pairs. This gives an improved estimate of the vertex position.
Finally, the position r_v of the vertex is reconstructed by minimizing the χ² function (see [ref]):

  χ²(r_v) = Σ_i (r_v − r_i)^T V_i^{-1} (r_v − r_i)

where r_i is the global position of track i (i.e. the position assigned at the step above) and V_i is the covariance matrix of the vector r_i.
In order not to spoil the vertex resolution by including in the fit tracks that do not originate from the primary vertex (e.g. strange-particle decay tracks), the tracks giving a contribution larger than some value χ²_max to the global χ² are removed one by one from the sample, until no such tracks are left.
The parameter χ²_max was tuned, as a function of the event multiplicity, so as to obtain the best vertex resolution.

Alignment framework
Basic objects and alignment constants
The purpose of the ALICE alignment framework is to offer all the functionality related to storing alignment information, retrieving it from the Offline Conditions Data Base (OCDB) and consistently applying it to the ALICE geometry, in order to improve the knowledge of the real geometry by means of the additional information obtained by survey and alignment procedures, without needing to change the hard-coded implementation of the detector's geometry. The ALICE alignment framework is based on the AliAlignObj base class and its derived classes; each instance of this class is an alignment object storing the so-called alignment constants for a single alignable volume, that is, the information to uniquely identify the physical volume (the specific instance of the volume in the geometry tree) to be displaced and to unambiguously describe the delta-transformation to be applied to that volume. In the ALICE alignment framework an alignment object holds the following information:
a unique volume identifier
a unique global index
a delta-transformation
In the following we describe the meaning of these variables, how they are stored and set, and the functionality related to them.
The unique volume identifier
The unique volume identifier is the character string which allows the user to access a specific physical volume inside the geometry tree. For the ALICE geometry (which is a ROOT geometry) this is the volume path, that is, the string containing the names of all the physical volumes in the current branch, in the directory-tree fashion.
For example, “/A_1/B_i/.../M_j/Vol_k” identifies the physical volume “k-th copy of the volume Vol” by listing its container volumes; going from right to left in the path corresponds to going from the innermost to the outermost containers and from the lowest to the upper level in the geometry tree, starting from the mother volume “M_j” of the current volume “Vol_k” up to the physical top volume “A_1”, the root of the geometry tree.
The unique volume identifier stored by the alignment object is not the volume path but a symbolic volume name, a string dynamically associated to the corresponding volume path by a hash table built at the finalization stage of the geometry (the physical tree needs to be already closed) and stored as part of it. The choice of the symbolic volume names is constrained only by the following two rules:
Each name has to contain a leading sub-string indicating the sub-detector it pertains to; in this way the uniqueness of the name inside the sub-detector scope also guarantees its uniqueness in the global scope of the whole geometry.
Each name has to contain the intermediate alignable levels, separated by a slash (“/”), in case some other physical volume on the same geometry branch is in turn alignable.
There are two considerable advantages deriving from the choice to introduce symbolic volume names as the unique volume identifiers stored in the alignment objects, in place of the volume paths:
The unique volume identifier has no direct dependency on the geometry; in fact, changes in the volume paths are reflected in changes in the hash table associating the symbolic names to them, which is built and stored together with the geometry.
As a consequence, the validity of the alignment objects is not affected by changes in the geometry and hence is in principle unlimited in time.
The unique volume identifier can be freely chosen, according to the two simple rules mentioned above, thus allowing the user to assign meaningful names to the alignable volumes, as opposed to the volume paths, which inevitably are long strings of often obscure names.
The geometry then provides the user with some methods to query the hash table linking the symbolic volume names to the corresponding volume paths; in particular the user can
get the number of entries in the table;
retrieve a specific entry (symbolic volume name, volume path) either by index or by symbolic name.
The unique global index
Among the alignment constants we store a numerical index uniquely identifying the volume to which those constants refer; being a “short”, this numerical index has 16 bits available, which are filled with the index of the “layer” or sub-detector to which the volume belongs (5 bits) and with the “local index”, i.e. the index of the volume itself inside the sub-detector (the remaining 11 bits). Limiting the range of sub-detectors to 2^5 = 32 and of alignable volumes inside each sub-detector to 2^11 = 2048, this suits our needs.
The aim of indexing the alignable volumes is to have fast iterative access during alignment procedures. The framework makes it easy to browse the look-up table mapping indexes to symbolic volume names, by means of methods which return the symbolic volume name for the present object given either its global index or both its layer and local indexes. For these methods to work, the only condition is that at least one instance of an alignment object has been created, so that the static method building the look-up table has been called.
The delta-transformation
The delta-transformation is the transformation that defines the displacement to be applied to the given physical volume.
During the alignment process we want to correct the hard-coded, ideal position of some volume, initially fixed according to the engineers' drawings, by including the survey and alignment information related to those volumes; we say that we want to align the ideal geometry. With this aim, we need here to describe how the delta-transformations are defined, and thus how they have to be produced and applied to the ideal geometry in order to correct the global and local ideal transformations into global and local aligned transformations.
For the representation of the delta-transformation there are several possible conventions and choices, in particular:
to use the local-to-global or the global-to-local convention, and the “active-” or “passive-transformations” convention;
to use the local or the global delta-transformation to be stored in the alignment object and to be passed when setting the object itself;
the convention used for the Euler angles representing the delta-transformation;
the use of a matrix or of a minimal set of parameters (three orthogonal shifts plus three Euler angles) to be stored in the alignment object and to be passed when setting the object itself.
The choices adopted by the framework are explained in the remainder of this section.
Use of the global and local transformations
Based on the ROOT geometry package, the framework keeps the “local-to-global” convention; this means that the global transformation for a given volume is the matrix G that, as in TGeo, transforms the local vector v_l (giving the position in the local reference system, i.e.
the reference system associated to that volume) into the global vector v_g, giving the position in the global (or master) reference system (“MARS”), according to:

  v_g = G v_l

Similarly, the local transformation matrix is the matrix L that transforms a local vector v_l into the corresponding vector in the mother volume RS, v_m, according to:

  v_m = L v_l

If, furthermore, G_m is the global transformation of the mother volume, then we can write:

  G = G_m L      (10)

Recursively repeating this argument for all the parent volumes, that is for all the volumes in the branch of the geometry tree which contains the given volume, we can write:

  G = L_1 L_2 … L_n      (11)

which shows that the global matrix is given by the product of the local matrices of the parent volumes on the geometry branch, from the uppermost to the lowest level.
Let us now denote by G and L the ideal global and local transformations of a specific physical volume (those relative to the reference geometry) and mark with the superscript “a” the corresponding matrices in the aligned geometry, so that G^a and L^a are the aligned global and aligned local transformations, which relate the position of a point in the local RS to its position in the global RS and in the mother's RS respectively, after the volume has been aligned, according to:

  v_g = G^a v_l,   v_m = L^a v_l      (12)

Equations 12 are the equivalent of Equations 10 and 11 after the volume has been displaced.
There are two possible choices for expressing the delta-transformation:
Use of the global delta-transformation Δ^g, that is the transformation to
be applied to the ideal global transformation G in order to get the aligned global transformation:

  G^a = Δ^g G      (13)

Use of the local delta-transformation Δ^l, that is the transformation to be applied to the ideal local transformation L to get the aligned local transformation:

  L^a = L Δ^l      (14)

Equations 13 and 14 allow rewriting:

  G^a = G_m L^a      (15)

as:

  Δ^g G = G_m L Δ^l      (16)

or equivalently:

  Δ^g = G Δ^l G^{-1}      (17)

to relate global and local alignment.
The alignment object stores the global delta-transformation as its delta-transformation; nevertheless, both the global and the local delta-transformation can be used to construct the alignment object or to set it. The reason for this flexibility in the user interface is that the local RS is sometimes the most natural one for expressing the misalignment, as e.g. in the case of a volume rotated around its centre; however, the use of the local delta-transformation is sometimes error-prone; in fact, the user has to be aware that he is referring to the same local RS which is defined in the hard-coded geometry when positioning the given volume, while the local RS used by the simulation or reconstruction code can in general be different. In case the alignment object is constructed, or its delta-transformation is set, by means of the local delta-transformation, the framework will then use Equation 17 to perform the conversion into global alignment constants.
As for the choice of storing a symbolic volume name instead of the volume path as the volume identifier, we would also like to make the delta-transformation stored in the alignment objects independent from the geometry, thus keeping their validity unconstrained.
This is possible if we store in the geometry itself a matrix for the ideal global transformation related to that volume (this possibility is offered by the class storing the link between symbolic volume names and volume paths, see Section 7.4.2).

Matrix or parameters for the delta-transformation
The global delta-transformation can be saved both
as a TGeoMatrix and
as a set of six parameters, of which three define the translation, by means of the shifts in the three orthogonal directions, and three define the rotation, by means of three Euler angles.
These two cases correspond to choosing one of the following two AliAlignObj-derived classes:
AliAlignObjMatrix: stores a TGeoHMatrix;
AliAlignObjAngles: stores six double-precision floating point numbers.
While storing the alignment constants in a different form, the two classes present the same user interface, which allows setting the delta-transformation both via the matrix and via the six parameters that identify it.
Choice for the Euler angles
A general rotation in three-dimensional Euclidean space can be decomposed into, and represented by, three successive rotations about the three orthogonal axes. The three angles characterizing the three rotations are called Euler angles; however, there are several conventions for the Euler angles, depending on the axes about which the rotations are carried out, right/left-handed systems, the (counter-)clockwise direction of rotation and the order of the three rotations.
The convention chosen in the ALICE alignment framework for the Euler angles is the xyz convention (see [ref]), also known as the pitch-roll-yaw or Tait-Bryan angles, or the Cardano angles convention. Following this convention, the general rotation is represented as a composition of a rotation around the z-axis (yaw) with a rotation around the y-axis (pitch) with a rotation around the x-axis (roll).
There is an additional choice to fully specify the convention used, since the angles have opposite signs depending on whether we consider them as bringing the original RS into coincidence with the aligned RS (active-transformation convention) or the other way round (passive-transformation convention). In order to keep our representation fully consistent with the TGeoRotation methods, we choose the active-transformation convention, that is, the opposite convention to the one chosen by the already referenced description of the pitch-roll-yaw angles [180].
To summarise, the three angles (ψ, θ, φ) used by the framework to represent the rotation part of the delta-transformation unambiguously represent a rotation D as the composition of the following three rotations:
a rotation C by an angle ψ (yaw) around the z-axis:

  C = | cosψ  −sinψ  0 |
      | sinψ   cosψ  0 |
      |  0      0    1 |

a rotation B by an angle θ (pitch) around the y-axis:

  B = |  cosθ  0  sinθ |
      |   0    1   0   |
      | −sinθ  0  cosθ |

a rotation A by an angle φ (roll) around the x-axis:

  A = | 1   0     0   |
      | 0  cosφ −sinφ |
      | 0  sinφ  cosφ |

which leads to:

  D = A B C

Use of ROOT geometry functionality
The ALICE geometry is implemented via the ROOT geometrical modeller (often referred to as TGeo), a framework for building, browsing, navigating and visualising a detector's geometry, which is independent from the Monte Carlo transport (see [ref] and the dedicated chapter in [ref]). This choice allows the ALICE alignment framework to take advantage of ROOT features such as its I/O, histogramming, browsing and GUI. However, the main advantage of this choice is that the ALICE alignment framework can provide its specific functionality as a rather thin layer on top of already existing features, which allow the complexity related to modifying a tree of some millions of physical nodes to be managed consistently and efficiently.
The ALICE alignment framework takes particular advantage of the possibility:
to save the geometry to a file and load it from a file;
to check the geometry for overlaps and extrusions exceeding a given threshold;
to query the geometry for the global and local matrices of a given physical volume;
to make a physical node out of a specific physical volume and change the local and global transformations associated to it, while keeping track of the original transformations;
to store a hash table of links between symbolic volume names and volume paths, which can be queried in an efficient way.
Concerning this last issue, the class representing the objects linking the symbolic volume names and the volume paths provides in addition the possibility to store a transformation. This feature turns out to be very useful when it is used to store the matrix relating the RS stored in the geometry (the global transformation matrix for that volume) to the RS used in simulation and reconstruction (the two in general differ).
Application of the alignment objects to the geometry
The base class provides a method to apply the single alignment object to the geometry present in memory, loaded from a file or constructed; the method accesses the geometry to change the position of the volume referred to by the unique volume identifier, according to Equation 13. However, this method alone cannot guarantee that the single object is applied correctly; the most common case is indeed the application of a set of alignment objects.
In this case the framework has to check that the application of each object in the set does not invalidate the application of the others; when applying a set of alignment objects during a simulation or reconstruction run, the framework transparently performs the following two checks:
In the case of alignment objects referring to physical volumes on the same branch, they have to be applied starting from the one which refers to a volume at the uppermost level in the physical tree (container volume) down to the one at the lowest level (contained volume). If, on the contrary, the contained volume were displaced first, the subsequent displacement of the container volume would change its temporarily correct position.
In no case should two alignment objects be applied separately to the same physical volume.
The reason for the first limitation is, in short, that the positions of the contained volumes depend on the positions of the container volumes. The reason for the second limitation is that the delta-transformations are relative to the ideal global position of the given volume (see Equation 13), which must therefore not have been modified by the previous application of an alignment object referring to the same volume.
The tools used by the framework to ensure that the two previous conditions are fulfilled are, respectively:
- sorting the alignment objects with a method that compares the depth of the physical volumes the objects refer to;
- combining alignment objects referring to the same volume before applying them to the geometry.
During a simulation or reconstruction run the user can thus consistently apply the objects to the geometry, with the two checks described above performed transparently.
An additional check is performed during a simulation or reconstruction run to verify that the application of the alignment objects has not introduced big overlaps or extrusions which would invalidate the geometry (hiding some sensitive parts or changing the material budget seen during tracking). This check is done by means of the overlap checker provided by the ROOT geometry package; a default threshold is fixed, below which overlaps and extrusions are accepted. The TGeo overlap checker favours speed (it checks the whole ALICE geometry in a few seconds) at the expense of completeness, so some rare overlap topologies can escape the check.

Access to the Conditions Data Base

An important task of the ALICE alignment framework is to mediate between the simulation and reconstruction jobs and the objects residing in the Offline Conditions Data Base (OCDB), both by defining a default behaviour and by managing specific use cases. The OCDB is filled with conditions (calibration and alignment) objects; the alignment objects in the OCDB are presently created by macros reproducing two possible misalignment scenarios: the initial misalignment, according to the expected deviations from the ideal geometry just after the sub-detectors are positioned, and the residual misalignment, reproducing the deviations which cannot be resolved by the alignment procedures.
The next step is to fill the OCDB with the alignment objects produced by the survey procedures, as soon as survey data are available offline. Finally, these objects and those produced by the alignment procedures will fill the OCDB used by the reconstruction of real data in its different passes.
The OCDB stores the conditions using the database capabilities of a three-level directory structure in a file system; the run range and the version are encoded in the file name. If not otherwise specified, the OCDB returns the latest version of the required object, and an uploaded object is automatically saved with an increased version number.
The ALICE alignment framework defines a default storage from which the alignment objects for all the sub-detectors are loaded; the user can set a different storage, residing either locally or on the Grid if he has the permissions to access it. A non-default OCDB storage, as well as its deactivation, can also be specified for individual sub-detectors only. The user can also switch off the loading of alignment objects from an OCDB storage altogether, either explicitly or as a side-effect of passing to the simulation or reconstruction run an array of alignment objects already available in memory.

Summary

The ALICE alignment framework, based on the ROOT geometry package (see the ROOT references in the bibliography), aims at allowing a consistent and flexible management of the alignment information, while keeping the related complexity as far as possible hidden from the user.
The framework allows:
- saving and retrieving the alignment constants relative to a specific alignable volume (automatic retrieval from a Conditions Data Base is handled);
- applying the alignment objects to the current (ideal) geometry;
- getting from the current geometry the alignment object for a specified alignable volume;
- transforming positions in the ideal global RS into positions in the aligned global RS;
- setting the objects by means of both global and local delta-transformations.
These functionalities are built on the AliAlignObj base class and its two derived classes, which store the delta-transformation either as a transformation matrix (AliAlignObjMatrix) or as the six transformation parameters (AliAlignObjAngles). The user interface is the same in both cases; it fixes the representation of the delta-transformation while leaving several choices to the user, which have been explained in this note together with their implementation.
The ALICE alignment framework fixes the following conventions:
- transformations are interpreted according to the local-to-global convention;
- the delta-transformation stored is the global delta-transformation;
- the three parameters specifying the rotation are the roll-pitch-yaw Euler angles, with the active-transformation convention.
The framework also fixes the following default behaviours in simulation and reconstruction runs:
- objects are loaded from a default Conditions Data Base storage, on a sub-detector basis;
- the set of loaded objects is sorted to ensure the consistency of its application to the geometry;
- the ideal and aligned geometries are saved.
Several choices related to the delta-transformation are left to the user, who:
- can set the alignment object either by passing a TGeoMatrix or by giving the six parameters which uniquely identify the global delta-transformation;
- can choose whether the object stores the TGeoMatrix, using an AliAlignObjMatrix, or the six parameters, using an
AliAlignObjAngles;
- can choose whether the transformation passed is the global delta-transformation or the local delta-transformation; in the latter case the framework converts it to the global one before setting the internal data members.

Glossary

ADC Analogue to Digital Conversion/Converter
AFS Andrew File System
ALICE A Large Ion Collider Experiment
AOD Analysis Object Data
API Application Program Interface
ARDA Architectural Roadmap towards Distributed Analysis
AliRoot ALICE offline framework
CA Certification Authority
CASTOR CERN Advanced STORage
CDC Computing Data Challenge
CDF Collider Detector at Fermilab
CE Computing Element
CERN European Organization for Nuclear Research
CINT C/C++ INTerpreter embedded in ROOT
CRT Cosmic Ray Trigger (official name: ACORDE)
CVS Concurrent Versioning System
DAQ Data AcQuisition system
DATE Data Acquisition and Test Environment
DCA Distance of Closest Approach
DCS Detector Control System
DPMJET Dual Parton Model Monte Carlo event generator
EGEE Enabling Grids for E-sciencE project
EMCal Electromagnetic Calorimeter
ESD Event Summary Data
FLUKA A fully integrated particle physics Monte Carlo simulation package
FMD Forward Multiplicity Detector
FSI Final State Interactions
GAG Grid Application Group
GUI Graphical User Interface
GeVSim fast Monte Carlo event generator, based on MEVSIM
Geant 4 A toolkit for simulation of the passage of particles through matter
HBT Hanbury Brown and Twiss
HEP High Energy Physics
HEPCAL HEP Common Application Area
HERWIG Monte Carlo package for simulating Hadron Emission Reactions With Interfering Gluons
HIJING Heavy Ion Jet INteraction Generator
HLT High Level Trigger
HMPID High Momentum Particle IDentification
ICARUS Imaging Cosmic And Rare Underground Signals
IP Internet Protocol
ITS Inner Tracking System; collective name for SPD, SDD and SSD
JETAN JET ANalysis module
LCG LHC Computing
Grid
LDAP Lightweight Directory Access Protocol
LHC Large Hadron Collider
LSF Load Sharing Facility
MC Monte Carlo
MoU Memorandum of Understanding
OCDB Offline Conditions DataBase
OO Object Oriented
OS Operating System
PAW Physics Analysis Workstation
PDC Physics Data Challenge
PDF Parton Distribution Function
PEB Project Execution Board
PHOS PHOton Spectrometer
PID Particle IDentity/IDentification
PMD Photon Multiplicity Detector
PPR Physics Performance Report
PROOF Parallel ROOT Facility
PWG Physics Working Group
PYTHIA event generator
QA Quality Assurance
QCD Quantum ChromoDynamics
QS Quantum Statistics
RICH Ring Imaging CHerenkov
ROOT A class library for data analysis
RTAG Requirements and Technical Assessment Group
SDD Silicon Drift Detector
SDTY Standard Data Taking Year
SE Storage Element
SI2k SpecInt2000 CPU benchmark
SLC Scientific Linux CERN
SOA Second Order Acronym
SPD Silicon Pixel Detector
SSD Silicon Strip Detector
TDR Technical Design Report
TOF Time Of Flight Detector
TPC Time Projection Chamber
TRD Transition Radiation Detector
UI User Interface
UID Unique IDentification number
URL Universal Resource Locator
VMC Virtual Monte Carlo
VO Virtual Organization
VOMS Virtual Organization Membership Service
WAN Wide Area Network
XML Extensible Markup Language
ZDC Zero Degree Calorimeter

References

The ALICE Offline Bible

- CERN/LHCC 2003-049, ALICE Physics Performance Report, Volume 1 (7 November 2003); ALICE Collaboration: F. Carminati et al., J. Phys. G: Nucl. Part. Phys.
30 (2004) 1517–1763.
- CERN-LHCC-2005-018, ALICE Technical Design Report: Computing, ALICE TDR 012 (15 June 2005).
-
-
-
-
- H.-U. Bengtsson and T. Sjostrand, Comput. Phys. Commun. 46 (1987) 43; T. Sjostrand, Comput. Phys. Commun. 82 (1994) 74; the code can be found in
- X. N. Wang and M. Gyulassy, Phys. Rev. D 44 (1991) 3501; M. Gyulassy and X. N. Wang, Comput. Phys. Commun. 83 (1994) 307–331. The code can be found in
- P. Saiz et al., Nucl. Instrum. Meth. A 502 (2003) 437–440.
-
-
-
-
-
-
-
-
-
-
-
-
- J. Ranft, Phys. Rev. D 51 (1995) 64.
- ALICE-INT-2003-036.
- A. Morsch,
- B. Andersson et al., Phys. Rep. 97 (1983) 31.
- B. Andersson et al., Nucl. Phys. B 281 (1987) 289; B. Nilsson-Almqvist and E. Stenlund, Comput. Phys. Commun. 43 (1987) 387.
- A. Capella et al., Phys. Rep. 236 (1994) 227.
-
- HERWIG 6.5: G. Corcella, I.G. Knowles, G. Marchesini, S. Moretti, K. Odagiri, P. Richardson, M.H. Seymour and B.R. Webber, JHEP 0101 (2001) 010 [hep-ph/0011363]; hep-ph/0210213.
- L. Ray and R.S. Longacre, STAR Note 419.
- S. Radomski and Y. Foka, ALICE Internal Note 2002-31.
-
- L. Ray and G.W. Hoffmann, Phys. Rev. C 54 (1996) 2582; Phys. Rev. C 60 (1999) 014906.
- P. K. Skowrónski, ALICE HBT Web Page,
- A.M. Poskanzer and S.A. Voloshin, Phys. Rev. C 58 (1998) 1671.
- A. Alscher, K. Hencken, D. Trautmann, and G. Baur, Phys. Rev. A 55 (1997) 396.
- K. Hencken, Y. Kharlov, and S. Sadovsky, ALICE Internal Note 2002-27.
-
- L. Betev, ALICE-PR-2003-279.
- CERN/LHCC 2005-049, ALICE Physics Performance Report, Volume 2 (5 December 2005).
- P. Billoir, NIM A 225 (1984) 352; P. Billoir et al., NIM A 241 (1985) 115; R. Fruhwirth, NIM A 262 (1987) 444; P.
Billoir, CPC (1989) 390.
- _Documentation.html
-
-
- CERN/LHCC 99-12.
- CERN/LHCC 2000-001.
- P. Skowrónski, PhD Thesis.
-
- P. Christakoglou, P. Hristov, ALICE-INT-2006-023.
-
- A. Shoshani, A. Sim, and J. Wu, "Storage resource managers: Middleware components for Grid storage", in Proceedings of the Nineteenth IEEE Symposium on Mass Storage Systems, 2002 (MSS 2002).
- K. Wu et al., "Grid collector: An event catalog with automated file management".
-
-
-
-
- {RefAnalysisFramework}
- &categ=a045061&id=a045061s0t5/transparencies
- {RefFileCatalogMetadataNote} M. Oldenburg, ALICE internal note, to be submitted to EDMS.
- {RefEventTagNote} P. Christakoglou and P. Hristov, "The Event Tag System", ALICE-INT-2006-023.
- {RefFileCatalogMetadataWeb} RunTags.html#Run/File%20metadata
- {RefAlienTutorial}
- {RefEventTagWeb} EventTagsAnalysis.html#Analysis%20with%20tags
- {Note:RefGSHELL}
- V. Karimäki, CMS Note 1997/051 (1997).
-
- R. Brun, A. Gheata and M.
Gheata, The ROOT geometry package, NIM A 502 (2003) 676–680.
- ROOT User's Guide.
58\ 3l\0\ 3\0\0f4\ 1pÖ(\0\0\0ÿæææ\0\0\0\0\0\0ÿ\0\0\0ÿ\0\0\0\0\0ÿ\0\0\0ÿ\0\0\0\0\0ÿ\0\0\0ÿ\0\0ŠT\ 1\0\0\ 5–\1c\0\0—\1c\0\0™\1c\0\0£\1c\0\0ª\1c\0\0·\1c\0\0k\0\0\0\0\0\0\0\0\0\0\0\0b\0\0\0\0\0\0\0\0\0\0\0\0\\0\0\0\0\0\0\0\0\0\0\0\0S\0\0\0\0\0\0\0\0\0\0\0\0\\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 \0\0\16$\ 1If\ 1\0\0\0gdL1]\0\ 6\0\0\16$\ 1If\ 1\0\0\0 \0\0\ 3$\ 2\16$\ 1If\ 1\0\0\0a$\ 2\0“\0\0kd\1c\ 6\0\0\16$\ 1\17$\ 1If\ 1\0\0\0\0T\ 1\0\ 34\ 1\bÖ\\0\ 4”ÿ\10\ 1¹\ 6Ã
59Y\16\0\a|\ 1\0\0\0\0\b\ 1\ 1\0\0\0\0\0\0\0\0\0\0\ 6©\ 5\0\0\0\0\ 1\ 1\ 1\0\0\0\0\0\0\0\0\0\0\a
60\ 4\0\0\0\0\ 1\ 1\ 1\0\0\0\0\0\0\0\0\0\0\ 6–\v\0\0\0\0\ 1\ 1\ 1\0\0\0\0\0\b\ 1\ 1\0 Ö\ 2\0\ 1\12Ö
61\0\0\0ÿæææ\0\0\0\14ö\ 1\0\0\1aÖ\10\0\0\0ÿ\0\0\0ÿ\0\0\0ÿ\0\0\0ÿ\eÖ\10\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1cÖ\10\0\0\0ÿ\0\0\0ÿ\0\0\0ÿ\0\0\0ÿ\1dÖ\10\0\0\0ÿ\0\0\0ÿ\0\0\0ÿ\0\0\0\0\ 6\0\ 1\ 5\ 3\0\0\ 6\0\ 1
62\ 3l\0\ 3\0\0f4\ 1pÖ(\0\0\0ÿæææ\0\0\0\0\0\0ÿ\0\0\0ÿ\0\0\0\0\0ÿ\0\0\0ÿ\0\0\0\0\0ÿ\0\0\0ÿ\0\0ŠT\ 1\0\0\ 5·\1c\0\0¸\1c\0\0¹\1c\0\0Ì\1c\0\0Í\1c\0\0ß\1c\0\0z\1e\0\0,\1f\0\0¦\1f\0\0§ \0\0k\0\0\0\0\0\0\0\0\0\0\0\0i\0\0\0\0\0\0\0\0\0\0\0\0`\0\0\0\0\0\0\0\0\0\0\0\0i\0\0\0\0\0\0\0\0\0\0\0\0W\0\0\0\0\0\0\0\0\0\0\0\0U\0\0\0\0\0\0\0\0\0\0\0\0U\0\0\0\0\0\0\0\0\0\0\0\0U\0\0\0\0\0\0\0\0\0\0\0\0U\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 1r\0\0\b\ 2\0\rÆ\ 5\0\ 1Êÿ\0gdL1]\0\0\b\ 1\0\rÆ\ 5\0\ 1Êÿ\0gdL1]\0\0\ 1\0\0\0“\0\0kd\17\a\0\0\16$\ 1\17$\ 1If\ 1\0\0\0\0T\ 1\0\ 34\ 1\bÖ\\0\ 4”ÿ\10\ 1¹\ 6Ã
63Y\16\0\a|\ 1\0\0\0\0\b\ 1\ 1\0\b\ 1\ 1\0\0\0\0\0\0\ 6©\ 5\0\0\0\0\ 1\ 1\ 1\0\b\ 1\ 1\0\0\0\0\0\0\a
64\ 4\0\0\0\0\ 1\ 1\ 1\0\b\ 1\ 1\0\0\0\0\0\0\ 6–\v\0\0\0\0\ 1\ 1\ 1\0\b\ 1\ 1\0\b\ 1\ 1\0 Ö\ 2\0\ 1\12Ö
65\0\0\0ÿæææ\0\0\0\14ö\ 1\0\0\1aÖ\10\0\0\0ÿ\0\0\0ÿ\0\0\0ÿ\0\0\0ÿ\eÖ\10\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1cÖ\10\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1dÖ\10\0\0\0ÿ\0\0\0ÿ\0\0\0ÿ\0\0\0\0\ 6\0\ 1\ 5\ 3\0\0\ 6\0\ 1
66\ 3l\0\ 3\0\0f4\ 1pÖ(\0\0\0ÿæææ\0\0\0\0\0\0ÿ\0\0\0ÿ\0\0\0\0\0ÿ\0\0\0ÿ\0\0\0\0\0ÿ\0\0\0ÿ\0\0ŠT\ 1\0\0 § \0\0\1e!\0\0ˆ"\0\0M#\0\0§#\0\05$\0\0e$\0\0†$\0\0š$\0\0Ì$\0\0=%\0\0•%\0\0#&\0\0v&\0\0ˆ&\0\0à'\0\0â'\0\0 (\0\0Ô+\0\0V-\0\0œ0\0\0Ü0\0\0ø0\0\0„3\0\0ý\0\0\0\0\0\0\0\0\0\0\0\0ý\0\0\0\0\0\0\0\0\0\0\0\0ý\0\0\0\0\0\0\0\0\0\0\0\0ý\0\0\0\0\0\0\0\0\0\0\0\0î\0\0\0\0\0\0\0\0\0\0\0\0î\0\0\0\0\0\0\0\0\0\0\0\0å\0\0\0\0\0\0\0\0\0\0\0\0î\0\0\0\0\0\0\0\0\0\0\0\0Ö\0\0\0\0\0\0\0\0\0\0\0\0Ö\0\0\0\0\0\0\0\0\0\0\0\0Ö\0\0\0\0\0\0\0\0\0\0\0\0ý\0\0\0\0\0\0\0\0\0\0\0\0ý\0\0\0\0