1 \documentclass[12pt,a4paper,twoside]{article}
2 \usepackage[margin=2cm]{geometry}
9 \newcommand{\class}[1]{\texttt{\textbf{#1}}\xspace}
10 \newcommand{\method}[1]{\texttt{#1}\xspace}
11 \renewcommand{\rmdefault}{ptm}
14 % ---------------------------------------------------------------
15 % define new commands/symbols
16 % ---------------------------------------------------------------
25 \hyphenation{da-ta-ba-ses}
27 \newcommand{\pt}{\ensuremath{p_{\mathrm{t}}}}
28 \newcommand{\et}{\ensuremath{E_{\mathrm{T}}}}
29 \newcommand {\pT} {\mbox{$p_{\rm t}$}}
30 \newcommand{\mt}{\ensuremath{m_{\mathrm{t}}}}
31 \newcommand {\grid} {Grid\@\xspace}
32 \newcommand {\MC} {Monte~Carlo\@\xspace}
33 \newcommand {\alien} {AliEn\@\xspace}
34 \newcommand {\pp} {\mbox{p--p}\@\xspace}
35 \newcommand {\pA} {\mbox{p--A}\@\xspace}
36 \newcommand {\PbPb} {\mbox{Pb--Pb}\@\xspace}
37 \newcommand {\aliroot} {AliRoot\@\xspace}
38 \newcommand {\ROOT} {ROOT\@\xspace}
39 \newcommand {\OO} {Object-Oriented\@\xspace}
41 \newcommand{\mrm}{\mathrm}
42 \newcommand{\dd}{\mrm{d}}
43 \newcommand{\elm}{e.m.\@\xspace}
\newcommand{\eg}{e.g.\@\xspace}
45 \newcommand{\ie}{i.e.\@\xspace}
46 \newcommand{\Jpsi} {\mbox{J\kern-0.05em /\kern-0.05em$\psi$}\xspace}
47 \newcommand{\psip} {\mbox{$\psi^\prime$}\xspace}
48 \newcommand{\Ups} {\mbox{$\Upsilon$}\xspace}
49 \newcommand{\Upsp} {\mbox{$\Upsilon^\prime$}\xspace}
50 \newcommand{\Upspp} {\mbox{$\Upsilon^{\prime\prime}$}\xspace}
51 \newcommand{\qqbar} {\mbox{$q\bar{q}$}\xspace}
53 \newcommand {\grad} {\mbox{$^{\circ}$}}
55 \newcommand {\rap} {\mbox{$\left | y \right | $}}
56 \newcommand {\mass} {\mbox{\rm GeV$\kern-0.15em /\kern-0.12em c^2$}}
57 \newcommand {\tev} {\mbox{${\rm TeV}$}}
58 \newcommand {\gev} {\mbox{${\rm GeV}$}}
59 \newcommand {\mev} {\mbox{${\rm MeV}$}}
60 \newcommand {\kev} {\mbox{${\rm keV}$}}
61 \newcommand {\mom} {\mbox{\rm GeV$\kern-0.15em /\kern-0.12em c$}}
62 \newcommand {\mum} {\mbox{$\mu {\rm m}$}}
63 \newcommand {\gmom} {\mbox{\rm GeV$\kern-0.15em /\kern-0.12em c$}}
64 \newcommand {\mmass} {\mbox{\rm MeV$\kern-0.15em /\kern-0.12em c^2$}}
65 \newcommand {\mmom} {\mbox{\rm MeV$\kern-0.15em /\kern-0.12em c$}}
66 \newcommand {\nb} {\mbox{\rm nb}}
67 \newcommand {\musec} {\mbox{$\mu {\rm s}$}}
68 \newcommand {\cmq} {\mbox{${\rm cm}^{2}$}}
69 \newcommand {\cm} {\mbox{${\rm cm}$}}
70 \newcommand {\mm} {\mbox{${\rm mm}$}}
71 \newcommand {\dens} {\mbox{${\rm g}\,{\rm cm}^{-3}$}}
73 \lstset{ % general command to set parameter(s)
74 % basicstyle=\small, % print whole listing small
75 basicstyle=\ttfamily, % print whole listing monospace
76 keywordstyle=\bfseries, % bold black keywords
identifierstyle=, % plain identifiers
commentstyle=\itshape, % comments in italic
79 stringstyle=\ttfamily, % typewriter type for strings
80 showstringspaces=false, % no special string spaces
81 columns=fullflexible, % Flexible columns
82 xleftmargin=2em, % Extra margin, left
83 xrightmargin=2em, % Extra margin, right
84 numbers=left, % Line numbers on the left
85 numberfirstline=true, % First line numbered
86 firstnumber=1, % Always start at 1
87 stepnumber=5, % Every fifth line
88 numberstyle=\footnotesize\itshape, % Style of line numbers
89 frame=lines} % Lines above and below listings
92 % ---------------------------------------------------------
93 % - End of Definitions
94 % ---------------------------------------------------------
98 \title{AliRoot Primer}
99 \author{Editor P.Hristov}
100 \date{Version v4-05-06 \\
108 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
110 \section{Introduction}\label{Introduction}
112 % -----------------------------------------------------------------------------
115 \subsection{About this primer}
The aim of this primer is to give some basic information about the
ALICE offline framework (AliRoot) from the user's perspective. We explain
119 in detail the installation procedure, and give examples of some
120 typical use cases: detector description, event generation, particle
121 transport, generation of ``summable digits'', event merging,
122 reconstruction, particle identification, and generation of event
summary data. The primer also includes some examples of analysis, and
a short description of the existing analysis classes in AliRoot. An
125 updated version of the document can be downloaded from
126 \url{http://aliceinfo.cern.ch/Offline/AliRoot/primer.html}.
For the reader interested in the AliRoot architecture and in the
performance studies done so far, a good starting point is Chapter 4 of
130 the ALICE Physics Performance Report\cite{PPR}. Another important
131 document is the ALICE Computing Technical Design Report\cite{CompTDR}.
132 Some information contained there has been included in the present
133 document, but most of the details have been omitted.
135 AliRoot uses the ROOT\cite{ROOT} system as a foundation on which the
136 framework for simulation, reconstruction and analysis is built. The
transport of the particles through the detector is carried out by the
Geant3\cite{Geant3} or FLUKA\cite{FLUKA} packages. Support for the
Geant4\cite{Geant4} transport package is coming soon.
141 Except for large existing libraries, such as Pythia6\cite{MC:PYTH} and
142 HIJING\cite{MC:HIJING}, and some remaining legacy code, this framework
is based on the Object-Oriented programming paradigm.
146 The following packages are needed to install the fully operational
147 software distribution:
149 \item ROOT, available from \url{http://root.cern.ch}
150 or using the ROOT CVS repository
152 :pserver:cvs@root.cern.ch:/user/cvs
154 \item AliRoot from the ALICE offline CVS repository
156 :pserver:cvs@alisoft.cern.ch:/soft/cvsroot
158 \item transport packages:
160 \item GEANT~3 is available from the ROOT CVS repository
161 \item FLUKA library can
162 be obtained after registration from \url{http://www.fluka.org}
163 \item GEANT~4 distribution from \url{http://cern.ch/geant4}.
Access to the Grid resources and data is provided by the
AliEn\cite{AliEn} system.
170 The installation details are explained in Section \ref{Installation}.
172 \subsection{AliRoot framework}\label{AliRootFramework}
174 In HEP, a framework is a set of software tools that enables data
175 processing. For example the old CERN Program Library was a toolkit to
176 build a framework. PAW was the first example of integration of tools
177 into a coherent ensemble specifically dedicated to data analysis. The
178 role of the framework is shown in Fig.~\ref{MC:Parab}.
182 \includegraphics[width=10cm]{picts/Parab}
183 \caption{Data processing framework.} \label{MC:Parab}
186 The primary interactions are simulated via event generators, and the
187 resulting kinematic tree is then used in the transport package. An
event generator produces a set of ``particles'' with their momenta. The
set of particles, together with the production history (in the form of
mother-daughter relationships and production vertices), forms the
kinematic tree. More details can be found in the ROOT documentation of
192 class \class{TParticle}. The transport package transports the
193 particles through the set of detectors, and produces \textbf{hits},
194 which in ALICE terminology means energy deposition at a given
point. The hits also contain information (``track labels'') about the
particles that generated them. In the case of calorimeters (PHOS and
EMCAL) the hit is the energy deposition in the whole active volume of
a detecting element. In some detectors the energy of the hit is used
only for comparison with a given threshold, for example in the TOF and ITS.
At the next step the detector response is taken into account, and the
hits are transformed into \textbf{digits}. As explained above,
the hits are closely related to the tracks which generated them. The
transition from hits/tracks to digits/detectors is marked in the
figure as ``disintegrated response'': the tracks are
``disintegrated'' and only the labels carry the \MC information.
There are two types of digits: \textbf{summable digits}, where one
uses low thresholds and the result is additive, and \textbf{digits},
where the real thresholds are used and the result is similar to what one
would get in real data taking. In some sense the \textbf{summable
digits} are precursors of the \textbf{digits}. The noise simulation is
activated when \textbf{digits} are produced. There are two differences
214 between the \textbf{digits} and the \textbf{raw} data format produced
215 by the detector: firstly, the information about the \MC particle
216 generating the digit is kept as data member of the class
217 \class{AliDigit}, and secondly, the raw data are stored in binary
218 format as ``payload'' in a ROOT structure, while the digits are stored
219 in ROOT classes. Two conversion chains are provided in AliRoot:
220 \textbf{hits} $\to$ \textbf{summable digits} $\to$ \textbf{digits},
221 and \textbf{hits} $\to$ \textbf{digits}. The summable digits are used
for the so-called ``event merging'', where a signal event is embedded
in a signal-free underlying event. This technique is widely used in
heavy-ion physics and makes it possible to reuse the underlying events with
substantial savings of computing resources. Optionally it is possible
to perform the conversion \textbf{digits} $\to$ \textbf{raw data},
which is used to estimate the expected data size, to evaluate the
high-level trigger algorithms, and to carry out the so-called computing data
challenges. The reconstruction and the HLT algorithms can work
with either \textbf{digits} or \textbf{raw data}. There is also the
231 possibility to convert the \textbf{raw data} between the following
formats: the format coming from the front-end electronics (FEE)
233 through the detector data link (DDL), the format used in the data
234 acquisition system (DAQ), and the ``rootified'' format. More details
235 are given in section \ref{Simulation}.
237 After the creation of digits, the reconstruction and analysis chain
238 can be activated to evaluate the software and the detector
239 performance, and to study some particular signatures. The
240 reconstruction takes as input digits or raw data, real or simulated.
The user can intervene in the cycle provided by the framework to
replace any part of it with their own code or implement their own analysis
243 of the data. I/O and user interfaces are part of the framework, as are
244 data visualization and analysis tools and all procedures that are
245 considered of general enough interest to be introduced into the
246 framework. The scope of the framework evolves with time as the needs
247 and understanding of the physics community evolve.
249 The basic principles that have guided the design of the AliRoot
250 framework are re-usability and modularity. There are almost as many
251 definitions of these concepts as there are programmers. However, for
252 our purpose, we adopt an operative heuristic definition that expresses
our objective to minimize the amount of unused or rewritten code and
maximize the participation of the physicists in the development of the
framework.
257 \textbf{Modularity} allows replacement of parts of our system with
258 minimal or no impact on the rest. Not every part of our system is
259 expected to be replaced. Therefore we are aiming at modularity
260 targeted to those elements that we expect to change. For example, we
261 require the ability to change the event generator or the transport \MC
262 without affecting the user code. There are elements that we do not
263 plan to interchange, but rather to evolve in collaboration with their
264 authors such as the ROOT I/O subsystem or the ROOT User Interface
265 (UI), and therefore no effort is made to make our framework modular
266 with respect to these. Whenever an element has to be modular in the
sense above, we define an abstract interface to it. The code of the
different detectors is independent so that different detector groups
269 can work concurrently on the system while minimizing the
270 interference. We understand and accept the risk that at some point the
271 need may arise to make modular a component that was not designed to
272 be. For these cases, we have elaborated a development strategy that
273 can handle design changes in production code.
275 \textbf{Re-usability} is the protection of the investment made by the
276 programming physicists of ALICE. The code embodies a large scientific
277 knowledge and experience and is thus a precious resource. We preserve
278 this investment by designing a modular system in the sense above and
279 by making sure that we maintain the maximum amount of backward
280 compatibility while evolving our system. This naturally generates
281 requirements on the underlying framework prompting developments such
282 as the introduction of automatic schema evolution in ROOT.
284 The \textbf{support} of the AliRoot framework is a collaborative effort
within the ALICE experiment. Questions, suggestions, topics for
286 discussion and messages are exchanged in the mailing list
287 \url{alice-off@cern.ch}. Bug reports and tasks are submitted on the
288 Savannah page \url{http://savannah.cern.ch/projects/aliroot/}.
290 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
294 \section{Installation and development tools}\label{Installation}
296 % -----------------------------------------------------------------------------
298 \subsection{Platforms and compilers}
The main development and production platform is Linux on 32-bit Intel
processors. The official Linux\cite{Linux} distribution at CERN is
Scientific Linux SLC\cite{SLC}. The code also works on
RedHat\cite{RedHat} versions 7.3, 8.0, and 9.0, Fedora Core\cite{Fedora} 1
-- 5, and on many other Linux distributions. The main compiler on
Linux is gcc\cite{gcc}: the recommended versions are gcc 3.2.3 --
3.4.6. The older releases (2.91.66, 2.95.2, 2.96) have problems with the
FORTRAN optimization, which has to be switched off for all the FORTRAN
packages. AliRoot can be used with gcc 4.0.X, where the FORTRAN
compiler g77 is replaced by g95. The latest release series of gcc (4.1)
works with gfortran as well. As an option you can use the Intel
icc\cite{icc} compiler, which is also supported. You can download
it from \url{http://www.intel.com} and use it free of charge for
non-commercial projects. Intel also provides free of charge the
VTune\cite{VTune} profiling tool, which is one of the best available.
AliRoot is supported on 64-bit Intel processors
(Itanium\cite{Itanium}) running Linux. Both the gcc and Intel icc
compilers can be used.
On 64-bit AMD\cite{AMD} processors, such as the Opteron, AliRoot runs
successfully with the gcc compiler.
324 The software is also regularly compiled and run on other Unix
325 platforms. On Sun (SunOS 5.8) we recommend the CC compiler Sun
326 WorkShop 6 update 1 C++ 5.2. The WorkShop integrates nice debugging
327 and profiling facilities which are very useful for code development.
On Compaq alpha servers (Digital Unix V4.0) the default compiler is cxx
(Compaq C++ V6.2-024 for Digital UNIX V4.0F). Alpha also provides its
own profiling tool, pixie, which works well with shared libraries. AliRoot
also works on alpha servers running Linux, where the compiler is gcc.
Recently AliRoot was ported to MacOS (Darwin). This OS is very
sensitive to circular dependencies among the shared libraries, which
makes it a very useful test platform.
338 % -----------------------------------------------------------------------------
340 \subsection{Essential CVS information}
CVS\cite{CVS} stands for Concurrent Versions System. It allows a
group of people to work simultaneously on sets of files (for
instance program sources). It also records the history of the files, which
allows backtracking and file versioning. The official CVS Web page is
\url{http://www.cvshome.org/}. CVS has a host of features, the
most important of which are:
349 \item CVS facilitates parallel and concurrent code development;
350 \item it provides easy support and simple access;
\item it makes it possible to establish group permissions (for example,
only detector experts and CVS administrators can commit code to a
given detector module).
CVS has a rich set of commands; the most important ones are described below.
Several tools for visualization, logging, and control
work with CVS. More information is available in the CVS documentation
and manual\cite{CVSManual}.
360 Usually the development process with CVS has the following features:
362 \item all developers work on their \underline{own} copy of the project
363 (in one of their directories)
364 \item they often have to \underline{synchronize} with a global
365 repository both to update with modifications from other people and
366 to commit their own changes.
Below we give an example of a typical CVS session:
371 \begin{lstlisting}[language=sh]
372 # Login to the repository. The password is stored in ~/.cvspass
373 # If no cvs logout is done, the password remains there and
374 # one can access the repository without new login
375 % cvs -d :pserver:hristov@alisoft.cern.ch:/soft/cvsroot login
376 (Logging in to hristov@alisoft.cern.ch)
380 # Check-Out a local version of the TPC module
381 % cvs -d :pserver:hristov@alisoft.cern.ch:/soft/cvsroot checkout TPC
382 cvs server: Updating TPC
389 # compile and test modifications
391 # Commit your changes to the repository with an appropriate comment
392 % cvs commit -m "add include file xxx.h" AliTPC.h
393 Checking in AliTPC.h;
394 /soft/cvsroot/AliRoot/TPC/AliTPC.h,v <-- AliTPC.h
395 new revision: 1.9; previous revision:1.8
Instead of specifying the repository and user name with the -d option, one
can export the environment variable CVSROOT, for example:
403 \begin{lstlisting}[language=sh]
404 % export CVSROOT=:pserver:hristov@alisoft.cern.ch:/soft/cvsroot
Once the local version has been checked out, the CVSROOT variable is no
longer needed inside the directory tree. The name of the actual
repository can be found in the CVS/Root file, and can be overridden
again with the -d option.
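The repository recorded in the CVS/Root file can be inspected directly. The sketch below fakes a minimal checkout layout to show where to look; the directory name and repository string are purely illustrative:

```shell
# Every checked-out directory contains a CVS/Root file recording the
# repository it came from. Fake a minimal layout for illustration:
mkdir -p demo-checkout/CVS
echo ":pserver:hristov@alisoft.cern.ch:/soft/cvsroot" > demo-checkout/CVS/Root

# Inspecting the file tells you which repository "cvs update" will use:
cat demo-checkout/CVS/Root
```

In a real checkout you would simply run `cat CVS/Root` inside the working directory.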
In case somebody else has committed changes to the AliTPC.h file, the
developer has to update the local version, merging in the local changes,
before committing:
416 \begin{lstlisting}[language=sh]
417 % cvs commit -m "add include file xxx.h" AliTPC.h
418 cvs server: Up-to-date check failed for `AliTPC.h'
419 cvs [server aborted]: correct above errors first!
422 cvs server: Updating .
423 RCS file: /soft/cvsroot/AliRoot/TPC/AliTPC.h,v
424 retrieving revision 1.9
425 retrieving revision 1.10
426 Merging differences between 1.9 and 1.10 into AliTPC.h
429 # edit, compile and test modifications
431 % cvs commit -m "add include file xxx.h" AliTPC.h
432 Checking in AliTPC.h;
433 /soft/cvsroot/AliRoot/TPC/AliTPC.h,v <-- AliTPC.h
434 new revision: 1.11; previous revision: 1.10
\textbf{Important note:} CVS performs a purely mechanical merging, and
it is the developer's responsibility to verify the result of this
operation. This is especially true in case of conflicts, when the CVS
tool is not able to merge the local and remote modifications
consistently.
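When CVS cannot merge automatically, it writes both versions into the file between the standard CVS/RCS conflict markers. The sketch below fabricates such a file (the file name, revisions, and members are invented for illustration) and shows a quick way to find unresolved conflicts before committing:

```shell
# Sketch of what a conflicted file looks like after "cvs update".
# The markers <<<<<<<, =======, >>>>>>> are standard CVS/RCS conflict
# syntax; the file name and revision number are illustrative only.
cat > AliTPC.h.conflicted <<'EOF'
<<<<<<< AliTPC.h
   Int_t fNewMember;      // your uncommitted local change
=======
   Float_t fOtherMember;  // change committed by somebody else
>>>>>>> 1.10
EOF

# Find unresolved conflict markers (with line numbers) before committing:
grep -n '^<<<<<<<' AliTPC.h.conflicted
```

The developer has to edit the file by hand, keep the correct version, and delete the markers before the commit will succeed.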
444 \subsection{Main CVS commands}
In the following examples we suppose that the CVSROOT environment
variable is set, as shown above. In case a local version has
already been checked out, the CVS repository is defined automatically
inside the directory tree.
\item\textbf{login} stores the password in .cvspass. It is enough to log in
once to the repository.
455 \item\textbf{checkout} retrieves the source files of AliRoot version v4-04-Rev-08
456 \begin{lstlisting}[language=sh]
457 % cvs co -r v4-04-Rev-08 AliRoot
460 \item\textbf{update} retrieves modifications from the repository and
461 merges them with the local ones. The -q option reduces the verbose
462 output, and the -z9 sets the compression level during the data
463 transfer. The option -A removes all the ``sticky'' tags, -d removes
464 the obsolete files from the local distribution, and -P retrieves the
465 new files which are missing from the local distribution. In this way
466 the local distribution will be updated to the latest code from the
467 main development branch.
468 \begin{lstlisting}[language=sh]
469 % cvs -qz9 update -AdP STEER
472 \item\textbf{diff} shows differences between the local and repository
473 versions of the whole module STEER
474 \begin{lstlisting}[language=sh]
475 % cvs -qz9 diff STEER
478 \item \textbf{add} adds files or directories to the repository. The
479 actual transfer is done when the commit command is invoked.
480 \begin{lstlisting}[language=sh]
481 % cvs -qz9 add AliTPCseed.*
484 \item\textbf{remove} removes old files or directories from the
repository. The -f option forces the removal of the local files. In
the example below the whole module CASTOR will be scheduled for
removal:
488 \begin{lstlisting}[language=sh]
489 % cvs remove -f CASTOR
492 \item\textbf{commit} checks in the local modifications to the
493 repository and increments the versions of the files. In the example
494 below all the changes made in the different files of the module
495 STEER will be committed to the repository. The -m option is
followed by the log message. In case you don't provide it, you will
be prompted by an editor window. No commit is possible without a
log message explaining what was done.
499 \begin{lstlisting}[language=sh]
% cvs -qz9 commit -m "Coding convention" STEER
503 \item\textbf{tag} creates new tags and/or branches (with -b option).
504 \begin{lstlisting}[language=sh]
505 % cvs tag -b v4-05-Release .
507 \item\textbf{status} returns the actual status of a file: revision,
508 sticky tag, dates, options, and local modifications.
509 \begin{lstlisting}[language=sh]
510 % cvs status Makefile
\item\textbf{logout} removes the password stored in
\$HOME/.cvspass. It is not necessary unless the user
wants to remove the password from that account.
519 % -----------------------------------------------------------------------------
521 \subsection{Environment variables}
Before the installation of AliRoot the user has to set some
environment variables. In the following examples the user is working
on Linux and the default shell is bash. It is enough to add a
few lines to the .bash\_profile file, as shown below:
528 \begin{lstlisting}[language=sh]
export ROOTSYS=/home/mydir/root
export PATH=$PATH:$ROOTSYS/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ROOTSYS/lib

export ALICE=/home/mydir/alice
export ALICE_ROOT=$ALICE/AliRoot
export ALICE_TARGET=`root-config --arch`
export PATH=$PATH:$ALICE_ROOT/bin/tgt_${ALICE_TARGET}
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ALICE_ROOT/lib/tgt_${ALICE_TARGET}

export PLATFORM=`root-config --arch` # Optional, defined otherwise in Geant3 Makefile
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ALICE/geant3/lib/tgt_${ALICE_TARGET}

export FLUPRO=$ALICE/fluka # $FLUPRO is used in TFluka
export PATH=$PATH:$FLUPRO/flutil
550 # Geant4: see the details later
553 where ``/home/mydir'' has to be replaced with the actual directory
554 path. The meaning of the environment variables is the following:
556 \texttt{ROOTSYS} -- the place where the ROOT package is located;
558 \texttt{ALICE} -- top directory for all the software packages used in ALICE;
560 \texttt{ALICE\_ROOT} -- the place where the AliRoot package is located, usually
561 as subdirectory of ALICE;
\texttt{ALICE\_TARGET} -- specific platform name. Up to release
v4-01-Release this variable was set to the result of the ``uname''
command. Starting from AliRoot v4-02-05 the ROOT naming scheme was
adopted, and the user has to use the ``root-config --arch'' command.
\texttt{PLATFORM} -- the same as ALICE\_TARGET, but for the GEANT~3
package. Until GEANT~3 v1-0 the user had to use `uname` to specify the
platform. From version v1-0 on, the ROOT platform is used instead
(``root-config --arch''). This environment variable is set by default
in the Geant3 Makefile.
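The variables above are easy to get wrong, so a quick sanity check after editing .bash\_profile can save a failed build. The script below is a minimal sketch: the variable names follow the list above, while the example paths and the check function are illustrative, not part of the AliRoot distribution:

```shell
# Sanity-check the essential AliRoot environment variables: verify each
# one is set and print its value. Variable names follow the list above;
# the example values are placeholders for a real installation.
check_var() {
    name=$1
    eval "value=\${$name}"
    if [ -z "$value" ]; then
        echo "ERROR: $name is not set"
        return 1
    fi
    echo "OK: $name=$value"
}

# Example values -- replace with your actual installation paths.
export ROOTSYS=/home/mydir/root
export ALICE=/home/mydir/alice
export ALICE_ROOT=$ALICE/AliRoot
export ALICE_TARGET=linux    # normally the output of `root-config --arch`

for v in ROOTSYS ALICE ALICE_ROOT ALICE_TARGET; do
    check_var $v
done
```

One could extend check\_var to also test that each directory exists with `[ -d "$value" ]`.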
575 % -----------------------------------------------------------------------------
577 \subsection{Software packages}
579 \subsubsection{AliEn}
The installation of AliEn is the first to be done if you plan to
access the Grid or need Grid-enabled ROOT. You can download the AliEn
installer and use it in the following way:
584 \begin{lstlisting}[language=sh, title={AliEn installation}]
585 % wget http://alien.cern.ch/alien-installer
586 % chmod +x alien-installer
589 The alien-installer runs a dialog which prompts for the default
590 selection and options. The default installation place for AliEn is
591 /opt/alien, and the typical packages one has to install are ``client''
596 All ALICE offline software is based on ROOT\cite{ROOT}. The ROOT
597 framework offers a number of important elements which are exploited in
601 \item a complete data analysis framework including all the PAW
603 \item an advanced Graphic User Interface (GUI) toolkit;
604 \item a large set of utility functions, including several commonly
605 used mathematical functions, random number generators,
606 multi-parametric fit and minimization procedures;
607 \item a complete set of object containers;
608 \item integrated I/O with class schema evolution;
609 \item C++ as a scripting language;
610 \item documentation tools.
There is a nice ROOT user's guide which contains important and
detailed information. For those who are not familiar with ROOT, a good
starting point is the ROOT Web page at \url{http://root.cern.ch}. Here
experienced users may easily find the latest version of the class
descriptions and search for useful information.
The recommended way to install ROOT is from the CVS sources, as described below:
623 \item Login to the ROOT CVS repository if you haven't done it yet.
624 \begin{lstlisting}[language=sh]
625 % cvs -d :pserver:cvs@root.cern.ch:/user/cvs login
629 \item Download (check out) the needed ROOT version (v5-13-04 in the example)
630 \begin{lstlisting}[language=sh]
631 % cvs -d :pserver:cvs@root.cern.ch:/user/cvs co -r v5-13-04 root
The appropriate combinations of ROOT, GEANT~3, and AliRoot versions
can be found at
635 \url{http://aliceinfo.cern.ch/Offline/AliRoot/Releases.html}
\item The code is stored in the directory ``root''. You have to go
there, set the ROOTSYS environment variable (if this is not done in
advance), and configure ROOT. ROOTSYS must contain the full path to the
ROOT directory.
642 \lstinputlisting[language=sh, title={Root configuration}]{scripts/confroot}
644 \item Now you can compile and test ROOT
645 \lstinputlisting[language=sh,title={Compiling and testing
646 ROOT}]{scripts/makeroot}
At this point the user should have a working ROOT version on Linux
(32-bit Pentium processor with the gcc compiler). The list of supported
platforms can be obtained with the ``./configure --help'' command.
654 \subsubsection{GEANT~3}
The installation of GEANT~3 is needed since, for the moment, this is
the default particle transport package. A GEANT~3 description is
available at
\url{http://wwwasdoc.web.cern.ch/wwwasdoc/geant_html3/geantall.html}.
You can download the GEANT~3 distribution from the ROOT CVS repository
and compile it in the following way:
663 \lstinputlisting[language=sh,title={Make GEANT3}]{scripts/makeg3}
Please note that GEANT~3 is downloaded into the \$ALICE directory. Another
important feature is the PLATFORM environment variable. If it is not
set, the Geant3 Makefile sets it to the result of `root-config --arch`.
670 \subsubsection{GEANT~4}
To use GEANT~4\cite{Geant4}, some additional software has to
be installed. GEANT~4 needs the CLHEP\cite{CLHEP} package; the user can
get the tar file (hereafter ``tarball'') from
\url{http://proj-clhep.web.cern.ch/proj-clhep/}.
Then the installation can be done in the following way:
677 \lstinputlisting[language=sh, title={Make CLHEP}]{scripts/makeclhep}
680 Another possibility is to use the CLHEP CVS repository:
682 \lstinputlisting[language=sh, title={Make CLHEP from
683 CVS}]{scripts/makeclhepcvs}
685 Now the following lines should be added to the .bash\_profile
687 \begin{lstlisting}[language=sh]
688 % export CLHEP_BASE_DIR=$ALICE/CLHEP
691 The next step is to install GEANT~4. The GEANT~4 distribution is available from
692 \url{http://geant4.web.cern.ch/geant4/}. Typically the following files
693 will be downloaded (the current versions may differ from the ones below):
695 \item geant4.8.1.p02.tar.gz: source tarball
696 \item G4NDL.3.9.tar.gz: G4NDL version 3.9 neutron data files with thermal cross sections
697 \item G4EMLOW4.0.tar.gz: data files for low energy electromagnetic processes - version 4.0
698 \item PhotonEvaporation.2.0.tar.gz: data files for photon evaporation - version 2.0
699 \item RadiativeDecay.3.0.tar.gz: data files for radioactive decay hadronic processes - version 3.0
700 \item G4ELASTIC.1.1.tar.gz: data files for high energy elastic scattering processes - version 1.1
703 Then the following steps have to be executed:
705 \lstinputlisting[language=sh, title={Make GEANT4}]{scripts/makeg4}
The env.sh script can be executed from the
\texttt{\~{}/.bash\_profile} to have the GEANT~4 environment variables
initialized automatically.
711 \subsubsection{FLUKA}
713 The installation of FLUKA\cite{FLUKA} consists of the following steps:
\item register as a FLUKA user at \url{http://www.fluka.org} if you
haven't yet done so. You will receive your ``fuid'' number and will set
your password.
721 \item download the latest FLUKA version from
722 \url{http://www.fluka.org}. Use your ``fuid'' registration and
723 password when prompted. You will obtain a tarball containing the
724 FLUKA libraries, for example fluka2006.3-linuxAA.tar.gz
726 \item install the libraries;
728 \lstinputlisting[language=sh, title={install FLUKA}]{scripts/makefluka}
730 \item compile TFluka;
732 \begin{lstlisting}[language=sh]
737 \item run AliRoot using FLUKA;
738 \begin{lstlisting}[language=sh]
739 % cd $ALICE_ROOT/TFluka/scripts
This script creates the directory tmp, with all the necessary
links for data and configuration files inside, and starts aliroot. For the
next run it is not necessary to run the script again. The tmp
directory can be kept or renamed; the user should run aliroot from
inside this directory.
749 \item from the AliRoot prompt start the simulation;
750 \begin{lstlisting}[language=C++]
751 root [0] AliSimulation sim;
755 You will get the results of the simulation in the tmp directory.
757 \item reconstruct the simulated event;
758 \begin{lstlisting}[language=sh]
763 and from the AliRoot prompt
764 \begin{lstlisting}[language=C++]
765 root [0] AliReconstruction rec;
769 \item report any problem you encounter to the offline list \url{alice-off@cern.ch}.
774 \subsubsection{AliRoot}
776 The AliRoot distribution is taken from the CVS repository and then
\begin{lstlisting}[language=sh]
779 % cvs -qz2 -d :pserver:cvs@alisoft.cern.ch:/soft/cvsroot co AliRoot
The AliRoot code (the above example retrieves the HEAD version from CVS) is
contained in the ALICE\_ROOT directory. The ALICE\_TARGET is defined
automatically in the \texttt{.bash\_profile} via the call to `root-config --arch`.
\subsection{Debugging}

While developing code or running an ALICE program, the user may be
confronted with the following execution errors:
\begin{itemize}
\item floating exceptions: division by zero, sqrt from a negative
argument, assignment of NaN, etc.;
\item segmentation violations/faults: attempt to access a memory
location that the program is not allowed to access, or in a way which is not
allowed;
\item bus error: attempt to access memory that the computer cannot
address.
\end{itemize}
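The first kind of error can be reproduced with a few self-contained lines of
C++ (an illustration, not ALICE code): a sqrt of a negative argument silently
produces a NaN that then propagates through all subsequent arithmetic.

\begin{lstlisting}[language=C++]
#include <cmath>
#include <cstdio>

int main() {
  double x = -1.0;
  double r = std::sqrt(x);   // sqrt of a negative argument yields NaN
  double sum = r + 42.0;     // the NaN propagates through arithmetic
  if (std::isnan(sum))
    std::printf("NaN detected\n");
  return 0;
}
\end{lstlisting}

Such values often surface only much later, e.g. as nonsensical reconstructed
quantities, which is why catching them early matters.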
In this case, the user will have to debug the program to determine the
source of the problem and fix it. There are several debugging
techniques, which are briefly listed below:
\begin{itemize}
\item debugging printout: using \texttt{printf(...)}, \texttt{std::cout}, and
\texttt{assert(...)}. Often this is the only easy way to find the origin of the
problem. \texttt{assert(...)} aborts the program execution if its
argument is false. It is a macro from \texttt{cassert} and can be
deactivated by compiling with \texttt{-DNDEBUG};
\item using gdb: gdb needs compilation with the \texttt{-g} option. Sometimes \texttt{-O2 -g}
prevents exact tracing, so it is safer to compile with
\texttt{-O0 -g} for debugging purposes.
One can use it directly (\texttt{gdb aliroot}) or attach it to a running
process (\texttt{gdb aliroot 12345}, where 12345 is the process id).
\end{itemize}
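The \texttt{assert(...)} mechanism can be illustrated with a short standalone
example (illustrative, not ALICE code; the bounds-checked accessor is
hypothetical):

\begin{lstlisting}[language=C++]
#include <cassert>
#include <cstdio>

// Hypothetical bounds check: the index must lie inside the container.
static int getHit(const int* hits, int n, int i) {
  assert(i >= 0 && i < n);  // aborts with a diagnostic if the index is invalid
  return hits[i];
}

int main() {
  int hits[3] = {10, 20, 30};
  std::printf("hit[1] = %d\n", getHit(hits, 3, 1));  // passes the assertion
  // getHit(hits, 3, 7) would abort here -- unless compiled with -DNDEBUG,
  // which turns assert(...) into a no-op.
  return 0;
}
\end{lstlisting}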
Below we report the main gdb commands and their descriptions:
\begin{itemize}
\item \textbf{run} starts the execution of the program;
\item \textbf{Control-C} stops the execution and switches to the gdb shell;
\item \textbf{where <n>} prints the program stack. Sometimes the program
stack is very long; the user can get the last n frames by specifying
n as a parameter to where;
\item \textbf{print} prints the value of a variable or expression;
\item \textbf{up} and \textbf{down} are used to navigate in the program stack;
\item \textbf{quit} exits the gdb session;
\item \textbf{break} sets a break point;
\begin{lstlisting}[language=C++]
(gdb) break AliLoader.cxx:100
(gdb) break 'AliLoader::AliLoader()'
\end{lstlisting}
Automatic completion of the class methods via the tab key is available
when an opening quote (`) is put in front of the class name;
\item \textbf{cont} continues the run;
\item \textbf{watch} sets a watchpoint (execution becomes very slow). The example below
shows how to check each change of fData;
\begin{lstlisting}[language=C++]
(gdb) watch fData
\end{lstlisting}
\item \textbf{list} shows the source code;
\item \textbf{help} shows the description of commands.
\end{itemize}
\subsection{Profiling}

Profiling is used to discover where the program spends most of the
time, and to optimize the algorithms. There are several profiling
tools available on different platforms:
\begin{itemize}
\item Linux:\\
gprof: needs compilation with the -pg option and static libraries;\\
oprofile: uses a kernel module;\\
VTune: instruments shared libraries.
\item Sun: Sun Workshop (Forte agent). It needs compilation with a
profiling option (-pg);
\item Compaq Alpha: pixie profiler, which instruments shared libraries for profiling.
\end{itemize}
On Linux AliRoot can be built with static libraries using a special
Makefile target:
\begin{lstlisting}[language=sh]
# build AliRoot with static libraries for profiling
# change LD_LIBRARY_PATH to replace lib/tgt_linux with lib/tgt_linuxPROF
# change PATH to replace bin/tgt_linux with bin/tgt_linuxPROF
% aliroot
root [0] gAlice->Run()
\end{lstlisting}
After the end of the aliroot session a file called gmon.out will be created. It
contains the profiling information, which can be investigated using gprof:

\begin{lstlisting}[language=sh]
% gprof `which aliroot` | tee gprof.txt
\end{lstlisting}
\textbf{VTune profiling tool}

VTune is available from the Intel Web site
\url{http://www.intel.com/software/products/index.htm}. It is free for
non-commercial use on Linux. It provides call-graph
and sampling profiling. VTune instruments shared libraries, and needs
only the -g option during the compilation. Here is an example of
call-graph profiling:

\begin{lstlisting}[language=sh]
# Register an activity
% vtl activity sim -c callgraph -app aliroot,"-b -q sim.C" -moi aliroot
# Run the activity and view the results
% vtl run sim
% vtl view sim::r1 -gui
\end{lstlisting}
\subsection{Detection of run-time errors}

The Valgrind tool can be used for detection of run-time errors on
Linux. It is available from \url{http://www.valgrind.org}. Valgrind
is equipped with the following set of tools:
\begin{itemize}
\item memcheck: detector of memory-management problems;
\item addrcheck: lightweight memory checker;
\item cachegrind: cache profiler;
\item massif: heap profiler;
\item helgrind: thread debugger;
\item callgrind: extended version of cachegrind.
\end{itemize}

The most important tool is memcheck. It can detect:
\begin{itemize}
\item use of non-initialized memory;
\item reading/writing memory after it has been freed;
\item reading/writing off the end of malloc'd blocks;
\item reading/writing inappropriate areas on the stack;
\item memory leaks -- where pointers to malloc'd blocks are lost forever;
\item mismatched use of malloc/new/new[] vs free/delete/delete[];
\item overlapping source and destination pointers in memcpy() and
related functions;
\item some misuses of the POSIX pthreads API.
\end{itemize}
Here is an example of Valgrind usage:

\begin{lstlisting}[language=sh]
% valgrind --tool=addrcheck --error-limit=no aliroot -b -q sim.C
\end{lstlisting}
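As an illustration of what memcheck reports, the following standalone fragment
(not ALICE code) contains two of the defects listed above: a conditional jump
on an uninitialized value and a memory leak.

\begin{lstlisting}[language=C++]
#include <cstdio>

int main() {
  int* a = new int[10];  // never deleted: memcheck reports this block
                         // as "definitely lost" in the leak summary
  if (a[0] > 0)          // a[0] was never initialized: memcheck flags a
                         // conditional jump on an uninitialised value
    std::printf("positive\n");
  a[0] = 1;
  std::printf("a[0] = %d\n", a[0]);
  return 0;
}
\end{lstlisting}

Running the compiled program under \texttt{valgrind --tool=memcheck} points at
the exact source lines of both defects.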
%\textbf{ROOT memory checker}

% The ROOT memory checker provides tests of memory leaks and other
% problems related to new/delete. It is fast and easy to use. Here is
% \item link aliroot with -lNew. The user has to add `\-\-new' before
% `\-\-glibs' in the ROOTCLIBS variable of the Makefile;
% \item add Root.MemCheck: 1 in .rootrc
% \item run the program: aliroot -b -q sim.C
% \item run memprobe -e aliroot
% \item Inspect the files with .info extension that have been generated.
\subsection{Useful information about LSF and CASTOR}

\textbf{The information in this section is included for completeness: the
users are strongly advised to rely on the GRID tools for massive
productions and data access.}

LSF is the batch system at CERN. Every user is allowed to submit jobs
to the different queues. Usually the user has to copy some input files
(macros, data, executables, libraries) from a local computer or from
the mass-storage system to the worker node on lxbatch, then to execute
the program, and to store the results on the local computer or in the
mass-storage system. The methods explained in this section are suitable
if the user does not have direct access to a shared directory, for
example on AFS. The main steps and commands are described below.
In order to have access to the local desktop and to be able to use scp
without a password, the user has to create a pair of SSH keys. Currently
lxplus/lxbatch uses RSA1 cryptography. After logging into lxplus the
following has to be done:

\begin{lstlisting}[language=sh]
% ssh-keygen -t rsa1
% cp .ssh/identity.pub public/authorized_keys
% ln -s ../public/authorized_keys .ssh/authorized_keys
\end{lstlisting}
A list of useful LSF commands is given below:
\begin{itemize}
\item \textbf{bqueues} shows the available queues and their status;
\item \textbf{bsub -q 8nm job.sh} submits the shell script job.sh to
the queue 8nm, where the name of the queue indicates the
``normalized CPU time'' (maximal job duration of 8 min of normalized CPU time);
\item \textbf{bjobs} lists all unfinished jobs of the user;
\item \textbf{lsrun -m lxbXXXX xterm} returns an xterm running on the
batch node lxbXXXX. This permits to inspect the job output and to
debug it interactively.
\end{itemize}
Each batch job stores its output in the directory LSFJOB\_XXXXXX, where
XXXXXX is the job id. Since the home directory is on AFS, the user has
to redirect the verbose output, otherwise the AFS quota might be
exceeded and the jobs will fail.
The CERN mass-storage system is CASTOR2~\cite{CASTOR2}. Every user has
his/her own CASTOR2 space, for example /castor/cern.ch/user/p/phristov.
The commands of CASTOR2 start with the prefix ``ns'' or ``rf''. Here is a
very short list of useful commands:
\begin{itemize}
\item \textbf{nsls /castor/cern.ch/user/p/phristov} lists the CASTOR
space of user phristov;
\item \textbf{rfdir /castor/cern.ch/user/p/phristov} the same as
above, but the output is in long format;
\item \textbf{nsmkdir test} creates a new directory (test) in the
CASTOR space of the user;
\item \textbf{rfcp /castor/cern.ch/user/p/phristov/test/galice.root .}
copies the file from CASTOR to the local directory. If the file is
on tape, this will trigger the stage-in procedure, which might take
some time;
\item \textbf{rfcp AliESDs.root /castor/cern.ch/user/p/phristov/test}
copies the local file AliESDs.root to CASTOR in the subdirectory
test and schedules it for migration to tape.
\end{itemize}
The user also has to be aware that the behavior of CASTOR depends on
the environment variables RFIO\_USE\_CASTOR\_V2(=YES),
STAGE\_HOST(=castoralice) and STAGE\_SVCCLASS(=default). They are set
by default to the values for the group (z2 in the case of ALICE).
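If needed, they can also be set explicitly in the shell; the values below are
the defaults quoted above:

\begin{lstlisting}[language=sh]
export RFIO_USE_CASTOR_V2=YES
export STAGE_HOST=castoralice
export STAGE_SVCCLASS=default
\end{lstlisting}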
Below the user can find an example of a job where the simulation and
reconstruction are run using the corresponding macros sim.C and rec.C.
Examples of such macros will be given later.

\lstinputlisting[language=sh,title={LSF example job}]{scripts/lsfjob}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Simulation} \label{Simulation}

% -----------------------------------------------------------------------------

\subsection{Introduction}
Heavy-ion collisions produce a very large number of particles in the
final state. This is a challenge for the reconstruction and analysis
algorithms. The detector design and the development of these algorithms require a predictive
and precise simulation of the detector response. Model predictions,
discussed in the first volume of the Physics Performance Report, for the
charged multiplicity in \mbox{Pb--Pb} collisions at the LHC vary from 1400
to 8000 particles in the central unit of rapidity. The experiment was
designed when the highest nucleon--nucleon center-of-mass energy available for
heavy-ion interactions was 20~\gev per nucleon--nucleon
pair, at the CERN SPS, \ie a factor of about 300 less than the energy at the
LHC. Recently, the RHIC collider came online. Its top energy of
200~\gev per nucleon--nucleon pair is still 30 times less
than the LHC energy. The RHIC data seem to suggest that the LHC
multiplicity will be on the lower side of the interval. However, the
extrapolation is so large that both the hardware and software of ALICE
have to be designed for the highest multiplicity. Moreover, since the
predictions of different generators of heavy-ion collisions differ
substantially at LHC energies, we have to use several of them and
compare the results.
The simulation of the processes involved in the transport through the
detector of the particles emerging from the interaction is confronted
with several problems:
\begin{itemize}
\item existing event generators give different answers for parameters
such as expected multiplicities, $p_T$ dependence and rapidity
dependence at LHC energies;
\item most of the physics signals, like hyperon production, high-$p_T$
phenomena, open charm and beauty, quarkonia etc., are not exactly
reproduced by the existing event generators;
\item simulation of small cross-sections would demand prohibitively
large computing resources to simulate a number of events commensurate with
the expected number of detected events in the experiment;
\item the existing generators do not provide for event topologies like
momentum correlations, azimuthal flow etc.
\end{itemize}
To allow nevertheless efficient simulations, we have adopted a
framework that allows for a number of options:
\begin{itemize}
\item the simulation framework provides an interface to external
generators, like HIJING~\cite{MC:HIJING} and
DPMJET~\cite{MC:DPMJET};
\item a parameterized, signal-free, underlying event where the
produced multiplicity can be specified as an input parameter is
available;
\item rare signals can be generated using the interface to external
generators like PYTHIA or simple parameterizations of transverse
momentum and rapidity spectra defined in function libraries;
\item the framework provides a tool to assemble events from
different signal generators (event cocktails);
\item the framework provides tools to combine underlying events and
signal events at the primary particle level (cocktail) and at the
summable digit level (merging);
\item ``afterburners'' are used to introduce particle correlations in a
controlled way. An afterburner is a program which changes the
momenta of the particles produced by another generator, and thus
modifies as desired the multi-particle momentum distributions.
\end{itemize}
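The principle of an afterburner can be sketched with a self-contained toy
example (an illustration only, not an AliRoot class): shifting each particle's
azimuth by $-v_2\sin 2(\varphi-\Psi)$ turns an initially uniform azimuthal
distribution into one with an elliptic modulation
$\dd N/\dd\varphi \propto 1 + 2v_2\cos 2(\varphi-\Psi)$, to first order in
$v_2$, without touching the generator itself.

\begin{lstlisting}[language=C++]
#include <cmath>
#include <cstdio>

// Toy flow afterburner: shift the azimuth of each particle so that an
// initially uniform distribution acquires an elliptic modulation.
static double addFlow(double phi, double psi, double v2) {
  return phi - v2 * std::sin(2.0 * (phi - psi));
}

int main() {
  const double kPi = 3.14159265358979323846;
  const double v2 = 0.05, psi = 0.0;  // input flow coefficient, event plane
  const int n = 100000;
  double sumCos2 = 0.0;
  for (int i = 0; i < n; ++i) {
    double phi = 2.0 * kPi * i / n;   // uniform azimuth from the "generator"
    sumCos2 += std::cos(2.0 * (addFlow(phi, psi, v2) - psi));
  }
  // The mean of cos 2(phi'-psi) over the modified sample estimates v2.
  std::printf("v2 estimate: %.3f\n", sumCos2 / n);
  return 0;
}
\end{lstlisting}

The printed estimate comes out close to the input $v_2=0.05$, confirming that
the momentum shift alone imprints the desired correlation.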
The implementation of this strategy is described below. The results of
different \MC generators for heavy-ion collisions are
described in section~\ref{MC:Generators}.
\subsection{Simulation framework}

The simulation framework covers the simulation of primary collisions
and generation of the emerging particles, the transport of particles
through the detector, the simulation of energy depositions (hits) in
the detector components, their response in the form of so-called summable
digits, the generation of digits from summable digits with the
optional merging of underlying events, and the creation of raw data.
The \class{AliSimulation} class provides a simple user interface to
the simulation framework. This section focuses on the simulation
framework from the (detector) software developers' point of view.

\begin{figure}[ht]
\centering
\includegraphics[width=10cm]{picts/SimulationFramework}
\caption{Simulation framework.} \label{MC:Simulation}
\end{figure}
\textbf{Generation of Particles}

Different generators can be used to produce particles emerging from
the collision. The class \class{AliGenerator} is the base class
defining the virtual interface to the generator programs. The
generators are described in more detail in the ALICE PPR Volume 1 and
in the next chapter.
\textbf{Virtual Monte Carlo}

The simulation of particles traversing the detector components is
performed by a class derived from \class{TVirtualMC}. The Virtual
Monte Carlo also provides an interface to construct the geometry of
detectors. The task of the geometry description is done by the
geometrical modeler \class{TGeo}. The concrete implementation of the
virtual \MC application \class{TVirtualMCApplication} is \class{AliMC}. The
\MC programs used in ALICE are GEANT~3.21, GEANT~4 and FLUKA. More
information can be found on the VMC Web page:
\url{http://root.cern.ch/root/vmc}.

As explained above, our strategy was to develop a virtual interface to
the detector simulation code. We call the interface to the transport
code virtual Monte Carlo. It is implemented via C++ virtual classes
and is schematically shown in Fig.~\ref{MC:vmc}. The codes that
implement the abstract classes are real C++ programs or wrapper
classes that interface to FORTRAN programs.
\begin{figure}[ht]
\centering
\includegraphics[width=10cm]{picts/vmc}
\caption{Virtual \MC.} \label{MC:vmc}
\end{figure}
Thanks to the virtual Monte Carlo we have converted all FORTRAN user
code developed for GEANT~3 into C++, including the geometry definition
and the user scoring routines, \texttt{StepManager}. These have been
integrated in the detector classes of the AliRoot framework. The
output of the simulation is saved directly with ROOT I/O, simplifying
the development of the digitization and reconstruction code in C++.
\textbf{Modules and Detectors}

Each module of the ALICE detector is described by a class derived from
\class{AliModule}. Classes for active modules (= detectors) are not
derived directly from \class{AliModule} but from its subclass
\class{AliDetector}. These base classes define the interface to the
simulation framework via a set of virtual methods.

\textbf{Configuration File (Config.C)}

The configuration file is a C++ macro that is processed before the
simulation starts. It creates and configures the Monte Carlo object,
the generator object, the magnetic field map and the detector modules.
A detailed description is given below.
\textbf{Detector Geometry}

The virtual Monte Carlo application creates and initializes the
geometry of the detector modules by calling the virtual functions
\method{CreateMaterials}, \method{CreateGeometry}, \method{Init} and
\method{BuildGeometry}.

\textbf{Vertexes and Particles}

In case the simulated event is intended to be merged with an
underlying event, the primary vertex is taken from the file containing
the underlying event by using the vertex generator
\class{AliVertexGenFile}. Otherwise the primary vertex is generated
according to the generator settings. Then the particles emerging from
the collision are generated and put on the stack (an instance of
\class{AliStack}). The transport of particles through the detector is
performed by the Monte Carlo object. The decay of particles is usually
handled by the external decayer \class{AliDecayerPythia}.
\textbf{Hits and Track References}

The Monte Carlo simulates the transport of a particle step by step.
After each step the virtual method \method{StepManager} of the module
in which the particle is currently located is called. In this step
manager method, the hits in the detector are created by calling
\method{AddHit}. Optionally, track references (location and
momentum of simulated particles at selected places) can also be created by
calling \method{AddTrackReference}. \method{AddHit} has to be
implemented by each detector, whereas \method{AddTrackReference} is
already implemented in \class{AliModule}. The container and the branch for the
hits -- and for the (summable) digits -- are managed by the detector
class via a set of so-called loaders. The relevant data members and
methods are fHits, fDigits, \method{ResetHits}, \method{ResetSDigits},
\method{ResetDigits}, \method{MakeBranch} and \method{SetTreeAddress}.

For each detector, methods like \method{PreTrack}, \method{PostTrack},
\method{FinishPrimary}, \method{FinishEvent} and \method{FinishRun}
are called during the simulation when the conditions indicated by the
method names are fulfilled.
\textbf{Summable Digits}

Summable digits are created by calling the virtual method \method{Hits2SDigits}
of a detector. This method loops over all events, creates the summable
digits from hits, and stores them in the sdigits file(s).
\textbf{Digitization and Merging}

Dedicated classes derived from \class{AliDigitizer} are used for the
conversion of summable digits into digits. Since \class{AliDigitizer}
is a \class{TTask}, this conversion is done for
the current event by the \method{Exec} method. Inside this method the summable
digits of all input streams have to be added, combined with noise,
converted to digital values taking into account possible thresholds,
and stored in the digits container.

The input streams (more than one in the case of merging) as well as the
output stream are managed by an object of type \class{AliRunDigitizer}. The
methods \method{GetNinputs}, \method{GetInputFolderName} and \method{GetOutputFolderName} return
the relevant information. The run digitizer is accessible inside the
digitizer via the protected data member fManager. If the flag
fRegionOfInterest is set, only detector parts where summable digits
from the signal event are present should be digitized. When \MC labels
are assigned to digits, the stream-dependent offset given by the
method \method{GetMask} is added to the label of the summable digit.

The detector-specific digitizer object is created in the virtual
method \method{CreateDigitizer} of the concrete detector class. The run
digitizer object is used to construct the detector
digitizer. The \method{Init} method of each digitizer is called before the loop
over the events starts.

A direct conversion from hits to digits can be implemented in
the method \method{Hits2Digits} of a detector. The loop over the events is
inside the method. Of course, merging is not supported in this case.
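The add--noise--threshold logic of a digitizer's \method{Exec} can be sketched
in a few self-contained lines (schematic only; a real detector uses its own
response model and the AliRoot containers):

\begin{lstlisting}[language=C++]
#include <cstdio>
#include <vector>

// Schematic digitization with merging: summable digits from a signal and a
// background stream are added channel by channel, a pedestal is added, and
// channels below threshold are zero-suppressed.
static std::vector<int> digitize(const std::vector<int>& signal,
                                 const std::vector<int>& background,
                                 int pedestal, int threshold) {
  std::vector<int> digits(signal.size());
  for (std::size_t i = 0; i < signal.size(); ++i) {
    int adc = signal[i] + background[i] + pedestal;  // merge streams + noise
    digits[i] = (adc >= threshold) ? adc : 0;        // apply the threshold
  }
  return digits;
}

int main() {
  std::vector<int> sig = {0, 40, 5};   // summable digits, signal event
  std::vector<int> bkg = {2, 3, 1};    // summable digits, underlying event
  std::vector<int> dig = digitize(sig, bkg, 10, 20);
  for (std::size_t i = 0; i < dig.size(); ++i)
    std::printf("%d ", dig[i]);        // only the middle channel survives
  std::printf("\n");
  return 0;
}
\end{lstlisting}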
An example of a simulation script that can be used for simulation of
proton--proton collisions is provided below:

\begin{lstlisting}[language=C++, title={Simulation run}]
void sim(Int_t nev=100) {
  AliSimulation simulator;
  // Measure the total time spent in the simulation
  TStopwatch timer;
  timer.Start();
  // List of detectors, where both summable digits and digits are provided
  simulator.SetMakeSDigits("TRD TOF PHOS EMCAL RICH MUON ZDC PMD FMD START VZERO");
  // Direct conversion of hits to digits for faster processing (ITS TPC)
  simulator.SetMakeDigitsFromHits("ITS TPC");
  simulator.Run(nev);
  timer.Stop();
  timer.Print();
}
\end{lstlisting}
The following example shows how one can do event merging:

\begin{lstlisting}[language=C++, title={Event merging}]
void sim(Int_t nev=6) {
  AliSimulation simulator;
  // The underlying events are stored in a separate directory.
  // Three signal events will be merged in turn with each
  // underlying event.
  simulator.MergeWith("../backgr/galice.root",3);
  simulator.Run(nev);
}
\end{lstlisting}
The digits stored in ROOT containers can be converted into the DATE~\cite{DATE}
format that will be the `payload' of the ROOT classes containing the
raw data. This is done for the current event in the method
\method{Digits2Raw} of the detector.

The simulation of raw data is managed by the class \class{AliSimulation}. To
create raw data DDL files it loops over all events. For each event it
creates a directory, changes to this directory and calls the method
\method{Digits2Raw} of each selected detector. In the \method{Digits2Raw} method the DDL
files of a detector are created from the digits for the current
event.

For the conversion of the DDL files to a DATE file the
\class{AliSimulation} class uses the tool dateStream. To create a raw
data file in ROOT format with the DATE output as payload, the program alimdc is
used.

The only part that has to be implemented by each detector is
the \method{Digits2Raw} method. In this method one file per
DDL has to be created, obeying the conventions for file names and DDL
IDs. Each file is a binary file with a DDL data header in the
beginning. The DDL data header is implemented in the structure
\class{AliRawDataHeader}. The data member fSize should be set to the total
size of the DDL raw data including the size of the header. The
attribute bit 0 should be set by calling the method \method{SetAttribute(0)} to
indicate that the data in this file is valid. The attribute bit 1 can
be set to indicate compressed raw data.
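The size and attribute-bit bookkeeping can be illustrated with a schematic
header structure (the field names and layout are illustrative; see
\class{AliRawDataHeader} for the real definition):

\begin{lstlisting}[language=C++]
#include <cassert>
#include <stdint.h>

// Schematic DDL data header (illustrative, not the exact AliRawDataHeader).
struct RawHeader {
  uint32_t fSize;       // total size of the DDL raw data, header included
  uint32_t fAttributes; // bit 0: data valid, bit 1: compressed
  void SetAttribute(int bit) { fAttributes |= (1u << bit); }
  bool TestAttribute(int bit) const { return (fAttributes >> bit) & 1u; }
};

int main() {
  RawHeader h = {0, 0};
  h.fSize = sizeof(RawHeader) + 1024; // header plus 1 kB of payload
  h.SetAttribute(0);                  // mark the data as valid
  assert(h.TestAttribute(0) && !h.TestAttribute(1));
  return 0;
}
\end{lstlisting}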
The detector-specific raw data are stored in the DDL files after the
DDL data header. The format of this raw data should be as close as
possible to the one that will be delivered by the detector. This
includes the order in which the channels will be read out.
Below we show an example of raw data creation for all the detectors:

\begin{lstlisting}[language=C++]
void sim(Int_t nev=1) {
  AliSimulation simulator;
  // Create raw data for ALL detectors, rootify it and store it in the
  // file raw.root. Do not delete the intermediate files
  simulator.SetWriteRawData("ALL","raw.root",kFALSE);
  simulator.Run(nev);
}
\end{lstlisting}
\subsection{Configuration: example of Config.C}

The example below contains as comments the most important information:

\lstinputlisting[language=C++]{scripts/Config.C}

% -----------------------------------------------------------------------------

\subsection{Event generation}
\label{MC:Generators}
\begin{figure}[ht]
\centering
\includegraphics[width=10cm]{picts/aligen}
\caption{\texttt{AliGenerator} is the base class, which has the
responsibility to generate the primary particles of an event. Some
realizations of this class do not generate the particles themselves
but delegate the task to an external generator like PYTHIA through the
\texttt{TGenerator} interface.} \label{MC:aligen}
\end{figure}
\subsubsection{Parameterized generation}

The event generation based on parameterization can be used to produce
signal-free final states. It avoids the dependence on a
specific model, and is efficient and flexible. It can be used to
study the track reconstruction efficiency
as a function of the initial multiplicity and occupancy.

\class{AliGenHIJINGparam}~\cite{MC:HIJINGparam} is an example of an internal
AliRoot generator based on parameterized
pseudorapidity density and transverse momentum distributions of
charged and neutral pions and kaons. The pseudorapidity
distribution was obtained from a HIJING simulation of central
Pb--Pb collisions and scaled to a charged-particle multiplicity of
8000 in the pseudorapidity interval $|\eta | < 0.5$. Note that
this is about 10\% higher than the corresponding value for a
rapidity density with an average ${\rm d}N/{\rm d}y$ of 8000 in
the interval $|y | < 0.5$.
The transverse-momentum distribution is parameterized from the
measured CDF pion $p_T$-distribution at $\sqrt{s} = 1.8$~\tev.
The corresponding kaon $p_T$-distribution was obtained from the
pion distribution by $m_T$-scaling. See Ref.~\cite{MC:HIJINGparam}
for the details of these parameterizations.
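Here $m_T$-scaling means that the spectra of different species are assumed to
coincide when expressed as a function of the transverse mass, so the kaon
distribution follows from the pion parameterization evaluated at the same
$m_T$:
\begin{equation}
m_T = \sqrt{p_T^2 + m^2}, \qquad
\frac{\dd N_K}{\dd m_T} \propto \frac{\dd N_\pi}{\dd m_T} .
\end{equation}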
In many cases, the expected transverse momentum and rapidity
distributions of particles are known. In other cases the effect of
variations in these distributions must be investigated. In both
situations it is appropriate to use generators that produce
primary particles and their decays sampling from parameterized
spectra. To meet the different physics requirements in a modular
way, the parameterizations are stored in independent function
libraries wrapped into classes that can be plugged into the
generator. This is schematically illustrated in
Fig.~\ref{MC:evglib} where four different generator libraries can
be loaded via the abstract generator interface.

It is customary in heavy-ion event generation to superimpose
different signals on an event to tune the reconstruction
algorithms. This is possible in AliRoot via the so-called cocktail
generator (Fig.~\ref{MC:cocktail}). This creates events from
user-defined particle cocktails by choosing as ingredients a list
of particle generators.
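A cocktail is configured by registering the ingredient generators. The
fragment below is a Config.C-style sketch; the ingredients, multiplicities and
rates are purely illustrative, and the class and method names should be
checked against the actual AliRoot headers:

\begin{lstlisting}[language=C++]
// Sketch of a cocktail configuration (illustrative Config.C fragment)
AliGenCocktail *cocktail = new AliGenCocktail();
// a signal-free underlying event as first ingredient
AliGenHIJINGparam *bg = new AliGenHIJINGparam(8000);
cocktail->AddGenerator(bg, "Background", 1);
// a rare signal on top of it
AliGenParam *jpsi = new AliGenParam(10, AliGenMUONlib::kJpsi);
cocktail->AddGenerator(jpsi, "JPsi", 1);
\end{lstlisting}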
\begin{figure}[ht]
\centering
\includegraphics[width=10cm]{picts/evglib}
\caption{\texttt{AliGenParam} is a realization of \texttt{AliGenerator}
that generates particles using parameterized \pt and
pseudorapidity distributions. Instead of coding a fixed number of
parameterizations directly into the class implementations, user-defined
parameterization libraries (AliGenLib) can be connected at
run time, allowing for maximum flexibility.} \label{MC:evglib}
\end{figure}
An example of \class{AliGenParam} usage is presented below:

\begin{lstlisting}[language=C++]
// Example for Upsilon production from parameterization
// using default library (AliMUONlib)
AliGenParam *gener = new AliGenParam(ntracks, AliGenMUONlib::kUpsilon);
gener->SetMomentumRange(0,999); // Wide cut on the Upsilon momentum
gener->SetPtRange(0,999); // Wide cut on Pt
gener->SetPhiRange(0. , 360.); // Full azimuthal range
gener->SetYRange(2.5,4); // In the acceptance of the MUON arm
gener->SetCutOnChild(1); // Enable cuts on Upsilon decay products
gener->SetChildThetaRange(2,9); // Theta range for the decay products
gener->SetOrigin(0,0,0); // Vertex position
gener->SetSigma(0,0,5.3); // Sigma in (X,Y,Z) (cm) on IP position
gener->SetForceDecay(kDiMuon); // Upsilon->mu+ mu- decay
gener->SetTrackingFlag(0); // No particle transport
\end{lstlisting}
To facilitate the usage of different generators, we have developed
an abstract generator interface called \texttt{AliGenerator}; see
Fig.~\ref{MC:aligen}. The objective is to provide the user with
an easy and coherent way to study a variety of physics signals as
well as a full set of tools for testing and background studies. This
interface allows the study of full events, signal processes, and
a mixture of both, \ie cocktail events (see an example later).

Several event generators are available via the abstract ROOT class
that implements the generic generator interface, \texttt{TGenerator}.
Through implementations of this abstract base class we wrap
FORTRAN \MC codes like PYTHIA, HERWIG, and HIJING, which are
thus accessible from the AliRoot classes. In particular, the
interface to PYTHIA includes the use of the nuclear structure
functions of LHAPDF.
\subsubsection{Pythia6}

Pythia is used for the simulation of proton--proton interactions and for the
generation of jets in the case of event merging. An example of minimum-bias
Pythia events is presented below:

\begin{lstlisting}[language=C++]
AliGenPythia *gener = new AliGenPythia(-1);
gener->SetMomentumRange(0,999999);
gener->SetThetaRange(0., 180.);
gener->SetYRange(-12,12);
gener->SetPtRange(0,1000);
gener->SetProcess(kPyMb); // Min. bias events
gener->SetEnergyCMS(14000.); // LHC energy
gener->SetOrigin(0, 0, 0); // Vertex position
gener->SetSigma(0, 0, 5.3); // Sigma in (X,Y,Z) (cm) on IP position
gener->SetCutVertexZ(1.); // Truncate at 1 sigma
gener->SetVertexSmear(kPerEvent); // Smear per event
gener->SetTrackingFlag(1); // Particle transport
\end{lstlisting}
\subsubsection{HIJING}

HIJING (Heavy-Ion Jet Interaction Generator) combines a
QCD-inspired model of jet production~\cite{MC:HIJING} with the
Lund model~\cite{MC:LUND} for jet fragmentation. Hard or
semi-hard parton scatterings with transverse momenta of a few GeV
are expected to dominate high-energy heavy-ion collisions. The
HIJING model has been developed with special emphasis on the role
of mini-jets in pp, pA and A--A reactions at collider energies.

Detailed systematic comparisons of HIJING results with a wide
range of data demonstrate a qualitative understanding of the
interplay between soft string dynamics and hard QCD interactions.
In particular, HIJING reproduces many inclusive spectra,
two-particle correlations, and the observed flavor and
multiplicity dependence of the average transverse momentum.

The Lund FRITIOF~\cite{MC:FRITIOF} model and the Dual Parton
Model~\cite{MC:DPM} (DPM) have guided the formulation of HIJING
for soft nucleus--nucleus reactions at intermediate energies,
$\sqrt{s_{\rm NN}}\approx 20$~\gev. The hadronic-collision
model has been inspired by the successful implementation of
perturbative QCD processes in PYTHIA~\cite{MC:PYTH}. Binary
scatterings with Glauber geometry for multiple interactions are
used to extrapolate to pA and A--A collisions.
1528 Two important features of HIJING are jet quenching and nuclear
1529 shadowing. Jet quenching is the energy loss by partons in nuclear
1530 matter. It is responsible for an increase of the particle
1531 multiplicity at central rapidities. Jet quenching is modeled by an
1532 assumed energy loss by partons traversing dense matter. A simple
1533 color configuration is assumed for the multi-jet system and the Lund
1534 fragmentation model is used for the hadronisation. HIJING does not
1535 simulate secondary interactions.
Shadowing describes the modification of the free-nucleon parton
density in the nucleus. At the low momentum fractions $x$
probed in collisions at the LHC, shadowing results in a decrease
of the multiplicity. Parton shadowing is taken into account using
a parameterization of the modification.
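The qualitative behavior of such a parameterization can be illustrated with a
toy example: a multiplicative factor $R_A(x)<1$ that suppresses the parton
density at small $x$ and approaches unity at larger $x$. The functional form
and all constants below are invented for illustration; they are not the
HIJING parameterization.

```cpp
#include <cmath>

// Toy nuclear-modification factor R_A(x): suppression ("shadowing") at
// small x that smoothly approaches 1 at larger x.
// Shape and constants are illustrative only, not the HIJING form.
double toyShadowingFactor(double x, double A) {
    double strength = 0.1 * (std::cbrt(A) - 1.0);  // grows with nuclear size
    return 1.0 - strength * std::exp(-x / 0.01);   // suppression dies out above x ~ 0.01
}
```

For a proton ($A=1$) the factor is identically 1; for Pb ($A=208$) it
suppresses the density noticeably only at $x \lesssim 10^{-2}$.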
Here is an example of event generation with HIJING:

\begin{lstlisting}[language=C++]
AliGenHijing *gener = new AliGenHijing(-1);
gener->SetEnergyCMS(5500.);             // center-of-mass energy
gener->SetReferenceFrame("CMS");        // reference frame
gener->SetProjectile("A", 208, 82);     // projectile
gener->SetTarget("A", 208, 82);         // target
gener->KeepFullEvent();                 // keep the full parent-child chain
gener->SetJetQuenching(1);              // enable jet quenching
gener->SetShadowing(1);                 // enable shadowing
gener->SetDecaysOff(1);                 // neutral pion and heavy particle decays switched off
gener->SetSpectators(0);                // don't track spectators
gener->SetSelectAll(0);                 // kinematic selection
gener->SetImpactParameterRange(0., 5.); // impact parameter range (fm)
\end{lstlisting}
\subsubsection{Additional universal generators}

The following universal generators are available in AliRoot:
\begin{itemize}
\item DPMJET: an implementation of the dual parton
model~\cite{MC:DPMJET};
\item ISAJET: a \MC event generator for pp, $\bar{p}p$, and $e^+e^-$
reactions~\cite{MC:ISAJET};
\item HERWIG: a \MC package for simulating Hadron Emission
Reactions With Interfering Gluons~\cite{MC:HERWIG}.
\end{itemize}

An example of HERWIG configuration in the Config.C is shown below:
\begin{lstlisting}[language=C++]
AliGenHerwig *gener = new AliGenHerwig(-1);
// final-state kinematic cuts
gener->SetMomentumRange(0,7000);
gener->SetPhiRange(0.,360.);
gener->SetThetaRange(0.,180.);
gener->SetYRange(-10,10);
gener->SetPtRange(0,7000);
// vertex position and smearing
gener->SetOrigin(0,0,0);          // vertex position
gener->SetVertexSmear(kPerEvent);
gener->SetSigma(0,0,5.6);         // sigma in (X,Y,Z) (cm) on IP position
// beam momenta
gener->SetBeamMomenta(7000,7000);
// beam particles
gener->SetProjectile("P");
gener->SetTarget("P");
// structure function
gener->SetStrucFunc(kGRVHO);
// hard scattering
gener->SetPtHardMin(200);
gener->SetPtRMS(20);
// process type (HERWIG IPROC)
gener->SetProcess(8000);
\end{lstlisting}
\subsubsection{Generators for specific studies}

\textbf{MEVSIM}

MEVSIM~\cite{MC:MEVSIM} was developed for the STAR experiment to
quickly produce a large number of A--A collisions for some
specific needs -- initially for HBT studies and for testing of
reconstruction and analysis software. However, since the user is
able to generate specific signals, it was extended to flow and
event-by-event fluctuation analysis. A detailed description of
MEVSIM can be found in Ref.~\cite{MC:MEVSIM}.

MEVSIM generates particle spectra according to a momentum model
chosen by the user. The main input parameters are: types and
numbers of generated particles, momentum-distribution model,
reaction-plane and azimuthal-anisotropy coefficients, multiplicity
fluctuation, number of generated events, etc. The momentum models
include factorized $p_T$ and rapidity distributions, non-expanding
and expanding thermal sources, arbitrary distributions in $y$ and
$p_T$, and others. The reaction plane and azimuthal anisotropy are
defined by the Fourier coefficients (maximum of six) including
directed and elliptical flow. Resonance production can also be
introduced.

MEVSIM was originally written in FORTRAN. It was later integrated into
AliRoot. A complete description of the AliRoot implementation of MEVSIM can
be found on the web page (\url{http://home.cern.ch/~radomski}).
\textbf{GeVSim}

GeVSim~\cite{MC:GEVSIM} is a fast and easy-to-use \MC
event generator implemented in AliRoot. It can provide events of
similar type configurable by the user according to the specific
needs of a simulation project, in particular, that of flow and
event-by-event fluctuation studies. It was developed to facilitate
detector-performance studies and the testing of algorithms.
GeVSim can also be used to generate signal-free events to be
processed by afterburners, for example the HBT processor.

GeVSim is based on the MevSim~\cite{MC:MEVSIM} event generator
developed for the STAR experiment.

GeVSim generates a list of particles by randomly sampling a
distribution function. The parameters of single-particle spectra
and their event-by-event fluctuations are explicitly defined by
the user. Single-particle transverse-momentum and rapidity spectra
can be either selected from a menu of four predefined
distributions, the same as in MevSim, or provided by the user.

Flow can be easily introduced into simulated events. The parameters of
the flow are defined separately for each particle type and can be
either set to a constant value or parameterized as a function of
transverse momentum and rapidity. Two parameterizations of elliptic
flow based on results obtained by RHIC experiments are provided.

GeVSim also has extended possibilities for simulating
event-by-event fluctuations. The model allows fluctuations
following an arbitrary analytically defined distribution in
addition to the Gaussian distribution provided by MevSim. It is
also possible to systematically alter a given parameter to scan
the parameter space in one run. This feature is useful when
analyzing performance with respect to, for example, multiplicity
or event-plane angle.

The current status and further development of GeVSim code and documentation
can be found in Ref.~\cite{MC:Radomski}.
\textbf{HBT processor}

Correlation functions constructed with the data produced by MEVSIM
or any other event generator are normally flat in the region of
small relative momenta. The HBT-processor afterburner introduces
two-particle correlations into the set of generated particles. It
shifts the momentum of each particle so that the correlation
function of a selected model is reproduced. The imposed
correlation effects due to Quantum Statistics (QS) and Coulomb
Final State Interactions (FSI) do not affect the single-particle
distributions and multiplicities. The event structures before and
after passing through the HBT processor are identical. Thus, the
event-reconstruction procedure with and without correlations is
also identical. However, the track-reconstruction efficiency, momentum
resolution and particle identification need not be, since
correlated particles have a special topology at small relative
velocities. We can thus verify the influence of various
experimental factors on the correlation functions.
The method, proposed by L.~Ray and G.W.~Hoffmann~\cite{MC:HBTproc},
is based on random shifts of the particle three-momentum within a
confined range. After each shift, a comparison is made with
correlation functions resulting from the assumed model of the
space--time distribution and with the single-particle spectra,
which should remain unchanged. The shift is kept if the
$\chi^2$-test shows better agreement. The process is iterated
until satisfactory agreement is achieved. In order to construct
the correlation function, a reference sample is made by mixing
particles from some consecutive events. This method has an
important consequence for the simulation: at least two events must
be processed simultaneously.
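The shift-and-test iteration described above can be sketched in standalone
C++. This is a deliberately simplified one-dimensional toy (single momentum
component, an invented Gaussian target correlation, a crude normalization),
not the actual HBT-processor code, which works on three-momenta and
mixed-event reference samples.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Toy 1D version of the Ray--Hoffmann method: randomly shift one particle
// momentum, recompute the chi^2 against a target correlation function,
// and keep the shift only if the agreement improves.

double targetCorr(double q) {            // assumed QS correlation model
    const double lambda = 0.5, R = 5.0;  // toy chaoticity and source radius
    return 1.0 + lambda * std::exp(-q * q * R * R);
}

// chi^2 between the pair relative-momentum histogram of `p` and the target
double chi2(const std::vector<double>& p) {
    const int nbins = 20; const double qmax = 1.0;
    std::vector<double> h(nbins, 0.0);
    double npairs = 0;
    for (size_t i = 0; i < p.size(); ++i)
        for (size_t j = i + 1; j < p.size(); ++j) {
            double q = std::fabs(p[i] - p[j]);
            if (q < qmax) { h[int(q / qmax * nbins)] += 1; npairs += 1; }
        }
    double c2 = 0.0;
    for (int b = 0; b < nbins; ++b) {
        double q = (b + 0.5) * qmax / nbins;
        double expected = npairs / nbins * targetCorr(q) / 1.1;  // crude normalization
        double d = h[b] - expected;
        c2 += d * d / (expected + 1.0);
    }
    return c2;
}

// One pass of the afterburner: momenta are only perturbed within a small
// step, so the single-particle spectra stay essentially unchanged.
double hbtProcess(std::vector<double>& p, int niter, unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> shift(-0.05, 0.05);
    std::uniform_int_distribution<size_t> pick(0, p.size() - 1);
    double best = chi2(p);
    for (int it = 0; it < niter; ++it) {
        size_t i = pick(gen);
        double old = p[i];
        p[i] += shift(gen);
        double c2 = chi2(p);
        if (c2 < best) best = c2; else p[i] = old;  // keep only improvements
    }
    return best;
}
```

Since a shift is kept only when the $\chi^2$ decreases, the agreement with
the target correlation function improves monotonically with the number of
iterations, while the multiplicity is untouched.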
Some specific features of this approach are important for practical use:
\begin{itemize}
\item the HBT processor can simultaneously generate correlations of up
to two particle types (e.g. positive and negative pions).
Correlations of other particles can be added subsequently.
\item the form of the correlation function has to be parameterized
analytically. One- and three-dimensional parameterizations are possible.
\item a static source is usually assumed. Dynamical effects, such as
expansion or flow, can be simulated in a stepwise form by repeating
simulations for different values of the space--time parameters
associated with different kinematic intervals.
\item Coulomb effects may be introduced by one of three approaches: Gamow
factor, experimentally modified Gamow correction and integrated
Coulomb wave functions for discrete values of the source radii.
\item strong interactions are not implemented.
\end{itemize}

The detailed description of the HBT processor can be found
elsewhere~\cite{MC:PiotrSk}.
\textbf{Flow afterburner}

Azimuthal anisotropies, especially elliptic flow, carry unique
information about collective phenomena and consequently are
important for the study of heavy-ion collisions. Additional
information can be obtained by studying different heavy-ion
observables, especially jets, relative to the event plane.
Therefore it is necessary to evaluate the capability of ALICE to
reconstruct the event plane and study elliptic flow.

Since there is no well understood microscopic description of
the flow effect, it cannot be correctly simulated by microscopic
event generators. Therefore, to generate events with flow the user has
to use event generators based on macroscopic models, like GeVSim
\cite{MC:GEVSIM}, or an afterburner which can generate flow on top
of events produced by event generators based on the microscopic
description of the interaction. In the AliRoot framework such a
flow afterburner is implemented.
The algorithm to apply azimuthal correlation consists in shifting the
azimuthal coordinates of the particles. The transformation is given
by~\cite{MC:POSCANCER}:
\[
\varphi \rightarrow \varphi' = \varphi + \Delta\varphi,
\]
\[
\Delta\varphi = \sum_{n}\frac{-2}{n}\,v_{n}\left(p_{t},y\right)
\sin\left[n\left(\varphi - \psi\right)\right],
\]
where \( v_{n}(p_{t},y) \) is the flow coefficient to be obtained, \( n \)
is the harmonic number and \( \psi \) is the event-plane angle.
Note that the algorithm is deterministic and does not involve any
random-number generation.
The value of the flow coefficient can be either constant or parameterized as a
function of transverse momentum and rapidity. Two parameterizations
of elliptic flow are provided, as in GeVSim.
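The transformation above is straightforward to implement. The sketch below
applies the shift for a single harmonic in plain C++; the constant $v_n$
value used in the test is illustrative only.

```cpp
#include <cmath>

// Apply the flow-afterburner shift for harmonic n with coefficient vn:
//   phi -> phi + dphi,  dphi = (-2/n) * vn * sin(n * (phi - psi)).
// Deterministic: no random numbers are involved.
double flowShiftedPhi(double phi, double psi, int n, double vn) {
    return phi + (-2.0 / n) * vn * std::sin(n * (phi - psi));
}
```

A particle emitted exactly in the event plane ($\varphi=\psi$) is not
shifted; for $v_2>0$, particles between the plane and 90 degrees are pulled
toward the plane, which produces the elliptic modulation.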
An example of event generation with GeVSim, including flow, is shown below:

\begin{lstlisting}[language=C++]
AliGenGeVSim* gener = new AliGenGeVSim(0);

Int_t mult = 2000;         // number of charged particles in |eta| < 0.5
Float_t sigma_eta = 2.75;  // sigma of the Gaussian dN/dEta
Float_t etamax = 7.00;     // maximum eta

// Scale from multiplicity in |eta| < 0.5 to |eta| < etamax
Float_t mm = mult * (TMath::Erf(etamax/sigma_eta/sqrt(2.)) /
                     TMath::Erf(0.5/sigma_eta/sqrt(2.)));

// Scale from charged to total multiplicity

// 78% Pions (26% pi+, 26% pi-, 26% pi0), T = 250 MeV
AliGeVSimParticle *pp =
  new AliGeVSimParticle(kPiPlus, 1, 0.26 * mm, 0.25, sigma_eta);
AliGeVSimParticle *pm =
  new AliGeVSimParticle(kPiMinus, 1, 0.26 * mm, 0.25, sigma_eta);
AliGeVSimParticle *p0 =
  new AliGeVSimParticle(kPi0, 1, 0.26 * mm, 0.25, sigma_eta);

// 12% Kaons (3% K0short, 3% K0long, 3% K+, 3% K-), T = 300 MeV
AliGeVSimParticle *ks =
  new AliGeVSimParticle(kK0Short, 1, 0.03 * mm, 0.30, sigma_eta);
AliGeVSimParticle *kl =
  new AliGeVSimParticle(kK0Long, 1, 0.03 * mm, 0.30, sigma_eta);
AliGeVSimParticle *kp =
  new AliGeVSimParticle(kKPlus, 1, 0.03 * mm, 0.30, sigma_eta);
AliGeVSimParticle *km =
  new AliGeVSimParticle(kKMinus, 1, 0.03 * mm, 0.30, sigma_eta);

// 10% Protons / Neutrons (5% Protons, 5% Neutrons), T = 250 MeV
AliGeVSimParticle *pr =
  new AliGeVSimParticle(kProton, 1, 0.05 * mm, 0.25, sigma_eta);
AliGeVSimParticle *ne =
  new AliGeVSimParticle(kNeutron, 1, 0.05 * mm, 0.25, sigma_eta);

// Set elliptic-flow properties
Float_t vn = 0.05;           // flow coefficient (example value)
Float_t pTsaturation = 2.;
pp->SetEllipticParam(vn,pTsaturation,0.);
pm->SetEllipticParam(vn,pTsaturation,0.);
p0->SetEllipticParam(vn,pTsaturation,0.);
pr->SetEllipticParam(vn,pTsaturation,0.);
ne->SetEllipticParam(vn,pTsaturation,0.);
ks->SetEllipticParam(vn,pTsaturation,0.);
kl->SetEllipticParam(vn,pTsaturation,0.);
kp->SetEllipticParam(vn,pTsaturation,0.);
km->SetEllipticParam(vn,pTsaturation,0.);

// Set directed-flow properties
pp->SetDirectedParam(vn,1.0,0.);
pm->SetDirectedParam(vn,1.0,0.);
p0->SetDirectedParam(vn,1.0,0.);
pr->SetDirectedParam(vn,1.0,0.);
ne->SetDirectedParam(vn,1.0,0.);
ks->SetDirectedParam(vn,1.0,0.);
kl->SetDirectedParam(vn,1.0,0.);
kp->SetDirectedParam(vn,1.0,0.);
km->SetDirectedParam(vn,1.0,0.);

// Add particles to the list
gener->AddParticleType(pp);
gener->AddParticleType(pm);
gener->AddParticleType(p0);
gener->AddParticleType(pr);
gener->AddParticleType(ne);
gener->AddParticleType(ks);
gener->AddParticleType(kl);
gener->AddParticleType(kp);
gener->AddParticleType(km);

// Random event-plane angle, flat in [0, 360) degrees
TF1 *rpa = new TF1("gevsimPsiRndm","1", 0, 360);

gener->SetPtRange(0., 9.);   // used for bin size in numerical integration
gener->SetPhiRange(0, 360);
gener->SetOrigin(0, 0, 0);   // vertex position
gener->SetSigma(0, 0, 5.3);  // sigma in (X,Y,Z) (cm) on IP position
gener->SetCutVertexZ(1.);    // truncate at 1 sigma
gener->SetVertexSmear(kPerEvent);
gener->SetTrackingFlag(1);
\end{lstlisting}
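The multiplicity rescaling at the top of the listing assumes a Gaussian
$\dd N/\dd\eta$: the charged multiplicity quoted for $|\eta|<0.5$ is scaled
by the ratio of the Gaussian integrals over $|\eta|<\eta_{\rm max}$ and
$|\eta|<0.5$. A standalone check of that arithmetic, using \texttt{std::erf}
in place of \texttt{TMath::Erf}:

```cpp
#include <cmath>

// Scale a multiplicity measured in |eta| < 0.5 to the full range
// |eta| < etamax, assuming a Gaussian dN/dEta of width sigmaEta.
// Mirrors the TMath::Erf expression in the GeVSim listing above.
double scaleMultiplicity(double mult, double sigmaEta, double etamax) {
    return mult * std::erf(etamax / sigmaEta / std::sqrt(2.0)) /
                  std::erf(0.5    / sigmaEta / std::sqrt(2.0));
}
```

With the values of the listing (2000 charged particles in $|\eta|<0.5$,
$\sigma_\eta=2.75$, $\eta_{\rm max}=7$) this gives roughly $1.4\times10^4$
particles over the full $\eta$ range.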
\textbf{Generator for e$^+$e$^-$ pairs in Pb--Pb collisions}

In addition to strong interactions of heavy ions in central and
peripheral collisions, ultra-peripheral collisions of ions give
rise to coherent, mainly electromagnetic, interactions among which
the dominant process is the (multiple) e$^+$e$^-$-pair
production~\cite{MC:AlscherHT97}
\begin{equation}
AA \to AA + n({\rm e}^+{\rm e}^-), \label{nee}
\end{equation}
where $n$ is the pair multiplicity. Most electron--positron pairs
are produced in the very forward direction and escape the
experiment. However, for Pb--Pb collisions at the LHC the
cross-section of this process, about $230\,{\rm kb}$, is
enormous. A sizable fraction of pairs produced with large momentum
transfer can contribute to the hit rate in the forward detectors,
increasing the occupancy or trigger rate. In order to study this
effect an event generator for e$^+$e$^-$-pair production has
been implemented in the AliRoot framework~\cite{MC:Sadovsky}. The
class \texttt{TEpEmGen} is a realisation of the \texttt{TGenerator}
interface for external generators and wraps the FORTRAN code used
to calculate the differential cross-section. \texttt{AliGenEpEmv1}
derives from \texttt{AliGenerator} and uses the external generator to
put the pairs on the AliRoot particle stack.
\subsubsection{Combination of generators: AliGenCocktail}

\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{picts/cocktail}
\caption{The \texttt{AliGenCocktail} generator is a realization of {\tt
AliGenerator} which does not generate particles itself but
delegates this task to a list of objects of type {\tt
AliGenerator} that can be connected as entries ({\tt
AliGenCocktailEntry}) at run time. In this way different physics
channels can be combined in one event.} \label{MC:cocktail}
\end{figure}
Here is an example of a cocktail used for studies in the TRD detector:

\begin{lstlisting}[language=C++]
// The cocktail generator
AliGenCocktail *gener = new AliGenCocktail();

// Phi meson (10 particles)
AliGenParam *phi =
  new AliGenParam(10,new AliGenMUONlib(),AliGenMUONlib::kPhi,"Vogt PbPb");
phi->SetPtRange(0, 100);
phi->SetYRange(-1., +1.);
phi->SetForceDecay(kDiElectron);

// Omega meson (10 particles)
AliGenParam *omega =
  new AliGenParam(10,new AliGenMUONlib(),AliGenMUONlib::kOmega,"Vogt PbPb");
omega->SetPtRange(0, 100);
omega->SetYRange(-1., +1.);
omega->SetForceDecay(kDiElectron);

// J/psi family
AliGenParam *jpsi = new AliGenParam(10,new AliGenMUONlib(),
                                    AliGenMUONlib::kJpsiFamily,"Vogt PbPb");
jpsi->SetPtRange(0, 100);
jpsi->SetYRange(-1., +1.);
jpsi->SetForceDecay(kDiElectron);

// Upsilon family
AliGenParam *ups = new AliGenParam(10,new AliGenMUONlib(),
                                   AliGenMUONlib::kUpsilonFamily,"Vogt PbPb");
ups->SetPtRange(0, 100);
ups->SetYRange(-1., +1.);
ups->SetForceDecay(kDiElectron);

// Open charm particles
AliGenParam *charm = new AliGenParam(10,new AliGenMUONlib(),
                                     AliGenMUONlib::kCharm,"central");
charm->SetPtRange(0, 100);
charm->SetYRange(-1.5, +1.5);
charm->SetForceDecay(kSemiElectronic);

// Beauty particles: semi-electronic decays
AliGenParam *beauty = new AliGenParam(10,new AliGenMUONlib(),
                                      AliGenMUONlib::kBeauty,"central");
beauty->SetPtRange(0, 100);
beauty->SetYRange(-1.5, +1.5);
beauty->SetForceDecay(kSemiElectronic);

// Beauty particles to J/psi -> e+e-
AliGenParam *beautyJ = new AliGenParam(10, new AliGenMUONlib(),
                                       AliGenMUONlib::kBeauty,"central");
beautyJ->SetPtRange(0, 100);
beautyJ->SetYRange(-1.5, +1.5);
beautyJ->SetForceDecay(kBJpsiDiElectron);

// Adding all the components of the cocktail
gener->AddGenerator(phi,"Phi",1);
gener->AddGenerator(omega,"Omega",1);
gener->AddGenerator(jpsi,"J/psi",1);
gener->AddGenerator(ups,"Upsilon",1);
gener->AddGenerator(charm,"Charm",1);
gener->AddGenerator(beauty,"Beauty",1);
gener->AddGenerator(beautyJ,"J/Psi from Beauty",1);

// Settings common to all components
gener->SetOrigin(0, 0, 0);   // vertex position
gener->SetSigma(0, 0, 5.3);  // sigma in (X,Y,Z) (cm) on IP position
gener->SetCutVertexZ(1.);    // truncate at 1 sigma
gener->SetVertexSmear(kPerEvent);
gener->SetTrackingFlag(1);
\end{lstlisting}
\subsection{Particle transport}

\subsubsection{TGeo essential information}

A detailed description of the Root geometry package is available in
the Root User's Guide~\cite{RootUsersGuide}. Several examples can be
found in \$ROOTSYS/tutorials, among them assembly.C, csgdemo.C,
geodemo.C, nucleus.C, rootgeom.C, etc. Here we show a simple usage for
export/import of the ALICE geometry and for checking for overlaps and
extrusions:

\begin{lstlisting}[language=C++]
// Export the geometry
root [0] gAlice->Init()
root [1] gGeoManager->Export("geometry.root")

// Import the geometry (in a new session)
root [0] TGeoManager::Import("geometry.root")
root [1] gGeoManager->CheckOverlaps()
root [2] gGeoManager->PrintOverlaps()
root [3] new TBrowser
// Now you can navigate in Geometry->Illegal overlaps
// and draw each overlap (double click on it)
\end{lstlisting}
\subsubsection{Visualization}

Below we show an example of VZERO visualization using the Root
geometry package:

\begin{lstlisting}[language=C++]
root [0] gAlice->Init()
root [1] TGeoVolume *top = gGeoManager->GetMasterVolume()
root [2] Int_t nd = top->GetNdaughters()
root [3] for (Int_t i=0; i<nd; i++) \
           top->GetNode(i)->GetVolume()->InvisibleAll()
root [4] TGeoVolume *v0ri = gGeoManager->GetVolume("V0RI")
root [5] TGeoVolume *v0le = gGeoManager->GetVolume("V0LE")
root [6] v0ri->SetVisibility(kTRUE);
root [7] v0ri->VisibleDaughters(kTRUE);
root [8] v0le->SetVisibility(kTRUE);
root [9] v0le->VisibleDaughters(kTRUE);
root [10] top->Draw();
\end{lstlisting}
\subsubsection{Particle decays}

We use Pythia to carry out particle decays during the transport. The
default decay channels can be seen in the following way:

\begin{lstlisting}[language=C++]
root [0] AliPythia * py = AliPythia::Instance()
root [1] py->Pylist(12); >> decay.list
\end{lstlisting}

The file decay.list will contain the list of particle decays
available in Pythia. Now if we want to force the decay $\Lambda^0 \to
p \pi^-$, the following lines should be included in the Config.C
before we register the decayer:

\begin{lstlisting}[language=C++]
AliPythia * py = AliPythia::Instance();
py->SetMDME(1059,1,0);
py->SetMDME(1060,1,0);
py->SetMDME(1061,1,0);
\end{lstlisting}

where 1059, 1060 and 1061 are the indices of the decay channels (from
decay.list above) we want to switch off.
\subsubsection{Examples}

\textbf{Fast simulation}

This example is taken from the macro
\$ALICE\_ROOT/FASTSIM/fastGen.C. It shows how one can create a
Kinematics tree which later can be used as input for the particle
transport. A simple selection of events with high multiplicity is
also applied:

\lstinputlisting[language=C++]{scripts/fastGen.C}
\textbf{Reading of the kinematics tree as input for the particle transport}

We suppose that the macro fastGen.C above has been used to generate
the corresponding set of files galice.root and Kinematics.root, and
that they are stored in a separate subdirectory, for example kine. Then
the following code in Config.C will read the set of files and put them
on the stack for transport:

\begin{lstlisting}[language=C++]
AliGenExtFile *gener = new AliGenExtFile(-1);

gener->SetMomentumRange(0,14000);
gener->SetPhiRange(0.,360.);
gener->SetThetaRange(45,135);
gener->SetYRange(-10,10);
gener->SetOrigin(0, 0, 0);   // vertex position
gener->SetSigma(0, 0, 5.3);  // sigma in (X,Y,Z) (cm) on IP position

AliGenReaderTreeK * reader = new AliGenReaderTreeK();
reader->SetFileName("../galice.root");

gener->SetReader(reader);
gener->SetTrackingFlag(1);
\end{lstlisting}
\textbf{Usage of different generators}

Many examples are available in
\$ALICE\_ROOT/macros/Config\_gener.C. The corresponding part can be
extracted and placed in the relevant Config.C file.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Reconstruction}

% -----------------------------------------------------------------------------

\subsection{Reconstruction Framework}

This section focuses on the reconstruction framework from the (detector) software
developer's point of view.

Unless explicitly specified otherwise, we refer
to the `global ALICE coordinate system'~\cite{CoordinateSystem}. It is a right-handed coordinate
system with the $z$ axis coinciding with the beam-pipe axis and going in the direction
opposite to the muon arm, the $y$ axis going up, and the origin of
coordinates defined by the intersection point of the $z$ axis
and the central-membrane plane of the TPC.
Here is a reminder of the terms used in the
description of the reconstruction framework (see also section~\ref{AliRootFramework}):
\begin{itemize}
\item {\it Digit}: This is a digitized signal (ADC count) obtained by
a sensitive pad of a detector at a certain time.
\item {\it Cluster}: This is a set of adjacent (in space and/or in time)
digits that were presumably generated by the same particle crossing the
sensitive element of a detector.
\item Reconstructed {\it space point}: This is the estimation of the
position where a particle crossed the sensitive element of a detector
(often, this is done by calculating the center of gravity of the
cluster).
\item Reconstructed {\it track}: This is a set of five parameters (such as the
curvature and the angles with respect to the coordinate axes) of the particle's
trajectory together with the corresponding covariance matrix estimated at a given
point in space.
\end{itemize}
The input to the reconstruction framework consists of digits in Root tree
format or in raw-data format. First a local reconstruction of clusters is
performed in each detector. Then vertices and tracks are reconstructed
and the particle identification is carried out. The output of the reconstruction
is the Event Summary Data (ESD). The \class{AliReconstruction} class provides
a simple user interface to the reconstruction framework, which is
explained in the source code documentation.

\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{picts/ReconstructionFramework}
\caption{Reconstruction framework.} \label{MC:Reconstruction}
\end{figure}
\textbf{Requirements and Guidelines}

The development of the reconstruction framework has been carried out
according to the following requirements and guidelines:
\begin{itemize}
\item the prime goal of the reconstruction is to provide the data that
is needed for a physics analysis;
\item the reconstruction should aim for high efficiency, purity and resolution;
\item the user should have an easy-to-use interface to extract the
required information from the ESD;
\item the reconstruction code should be efficient but also maintainable;
\item the reconstruction should be as flexible as possible.
It should be possible to do the reconstruction in one detector even in
the case that other detectors are not operational.
To achieve such flexibility each detector module should be able to
\begin{itemize}
\item find tracks starting from seeds provided by another detector
(external seeding),
\item find tracks without using information from other detectors
(internal seeding),
\item find tracks from external seeds and add tracks from internal seeds,
\item and propagate tracks through the detector using the already
assigned clusters in inward and outward direction;
\end{itemize}
\item where it is appropriate, common (base) classes should be used in
the different reconstruction modules;
\item the interdependencies between the reconstruction modules should
be minimized.
If possible the exchange of information between detectors should be
done via a common track class;
\item the chain of reconstruction program(s) should be callable and
steerable in an easy way;
\item there should be no assumptions on the structure or names of files
or on the number or order of events;
\item each class, data member and method should have a correct,
precise and helpful html documentation.
\end{itemize}
\textbf{AliReconstructor}

The interface from the steering class \class{AliReconstruction} to the
detector-specific reconstruction code is defined by the base class
\class{AliReconstructor}. For each detector there is a derived reconstructor
class. The user can set options for each reconstructor in the form of a
string parameter, which is accessible inside the reconstructor via the
method \method{GetOption}.

The detector-specific reconstructors are created via
plugins. Therefore they must have a default constructor. If no plugin
handler is defined by the user (in .rootrc), it is assumed that the
name of the reconstructor for detector DET is AliDETReconstructor and
that it is located in the library libDETrec.so (or libDET.so).
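The naming convention can be expressed as a small helper. This is just an
illustration of the rule stated above, not AliRoot code:

```cpp
#include <string>

// Default plugin names for a detector's reconstructor, following the
// convention described in the text (illustration only, not AliRoot code).
std::string reconstructorClass(const std::string& det) {
    return "Ali" + det + "Reconstructor";
}
std::string reconstructorLibrary(const std::string& det, bool fallback = false) {
    return fallback ? "lib" + det + ".so" : "lib" + det + "rec.so";
}
```

For the TPC, for example, the expected class is AliTPCReconstructor in
libTPCrec.so, with libTPC.so as the fallback library.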
\textbf{Input Data}

If the input data is provided in the form of Root trees, either the
loaders or directly the trees are used to access the digits. In case
of raw-data input the digits are accessed via a raw reader.

If a galice.root file exists, the run loader will be retrieved from
it. Otherwise the run loader and the headers will be created from the
raw data. The reconstruction cannot work if there is neither a galice.root file
nor raw-data input.
\textbf{Output Data}

The clusters (rec. points) are considered as intermediate output and
are stored in Root trees handled by the loaders. The final output of
the reconstruction is a tree with objects of type \class{AliESD} stored in the
file AliESDs.root. This Event Summary Data (ESD) contains lists of
reconstructed tracks/particles and global event properties. The detailed
description of the ESD can be found in section~\ref{ESD}.
\textbf{Local Reconstruction (Clusterization)}

The first step of the reconstruction is the so-called ``local
reconstruction''. It is executed for each detector separately and
without exchanging information with other detectors. Usually the
clusterization is done in this step.

The local reconstruction is invoked via the method \method{Reconstruct} of the
reconstructor object. Each detector reconstructor runs the local
reconstruction for all events. The local reconstruction method is
only called if the method \method{HasLocalReconstruction} of the reconstructor
returns kTRUE.

Instead of running the local reconstruction directly on raw data, it
is possible to first convert the raw-data digits into a digits tree
and then to call the \method{Reconstruct} method with a tree as input
parameter. This conversion is done by the method \method{ConvertDigits}. The
reconstructor has to announce that it can convert the raw-data digits
by returning kTRUE in the method \method{HasDigitConversion}.
\textbf{Vertexing}

The current reconstruction of the primary-vertex
position in ALICE is done using the information provided by the
silicon pixel detectors, which constitute the two innermost layers of the
ITS.

The algorithm starts by looking at the
distribution of the $z$ coordinates of the reconstructed space points
in the first pixel layers.
At a vertex $z$ coordinate $z_{\rm true} = 0$ the distribution is
symmetric and
its centroid ($z_{\rm cen}$) is very close to the nominal
vertex position. When the primary vertex is moved along the $z$ axis, an
increasing fraction
of hits will be lost and the centroid of the distribution no longer gives
the true
vertex position. However, for primary-vertex locations not too far from
the nominal position
(up to about 12~cm), the centroid of the distribution is still correlated with
the true vertex position.
The saturation effect at large $z_{\rm true}$ values of the vertex position
($z_{\rm true} = $12--15~cm)
is, however, not critical, since this procedure is only meant to find a rough
vertex position, in order to introduce some cut along $z$.
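The first step of this procedure amounts to taking the centroid of the
space-point $z$ distribution. A minimal standalone sketch (toy data; the
real \class{AliITSVertexerZ} works on ITS space points and applies further
corrections):

```cpp
#include <numeric>
#include <vector>

// Rough primary-vertex z estimate: centroid of the z coordinates of the
// reconstructed space points in the pixel layers (toy version of the
// first step described in the text; no acceptance corrections applied).
double roughVertexZ(const std::vector<double>& zPoints) {
    if (zPoints.empty()) return 0.0;  // no points: fall back to nominal z = 0
    return std::accumulate(zPoints.begin(), zPoints.end(), 0.0) /
           static_cast<double>(zPoints.size());
}
```

For a distribution symmetric around the true vertex, the centroid sits at
the vertex position; as described above, acceptance losses bias it once the
vertex moves far from $z = 0$.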
To find the final vertex position,
the correlation between the points $z_1$, $z_2$ in the two layers
is considered. More details and performance studies are available
elsewhere.

The primary vertex is reconstructed by a vertexer object derived from
\class{AliVertexer}. After the local reconstruction has been done for all detectors,
the vertexer method \method{FindVertexForCurrentEvent} is called for each
event. It returns a pointer to a vertex object of type \class{AliESDVertex}.

The vertexer object is created by the method \method{CreateVertexer} of the
reconstructor. So far only the ITS is used to determine the primary
vertex (\class{AliITSVertexerZ} class).

The precision of the primary-vertex reconstruction in the bending plane
required for the reconstruction of D and B mesons in pp events
can be achieved only after the tracking is done. The method is
implemented in \class{AliITSVertexerTracks}. It is called as a second
estimation of the primary vertex. The details of the algorithm can be
found in Appendix~\ref{VertexerTracks}.
\textbf{Combined Track Reconstruction}

The combined track reconstruction tries to accumulate the information from
different detectors in order to optimize the track-reconstruction performance.
The result is stored in the combined track objects.
The \class{AliESDTrack} class also
provides the possibility to exchange information between detectors
without introducing dependencies between the reconstruction modules.
This is achieved by using just integer indices pointing to the
specific track objects, which on the other hand makes it possible to
retrieve the full information if needed.
The list of combined tracks can be kept in memory and passed from one
reconstruction module to another.
The storage of the combined tracks should be done in the standard way.
The classes responsible for the reconstruction of tracks are derived
from \class{AliTracker}. They are created by the method
\method{CreateTracker} of the
reconstructors. The reconstructed position of the primary vertex is
made available to them via the method \method{SetVertex}. Before the track
reconstruction in a detector starts, the clusters are loaded from the
clusters tree by the method \method{LoadClusters}. After the track reconstruction the
clusters are unloaded by the method \method{UnloadClusters}.
The track reconstruction (in the barrel part) is done in three passes. The
first pass consists of track finding and fitting in the inward direction,
first in the TPC and then in the ITS. The virtual method
\method{Clusters2Tracks} (of the class \class{AliTracker}) is the
interface to this pass. The method for the next pass is
\method{PropagateBack}. It does the track reconstruction in the outward
direction and is invoked for all detectors starting with the ITS. The last
pass is the track refit in the inward direction in order to get the track
parameters at the vertex. The corresponding method \method{RefitInward} is
called for the TRD, TPC and ITS. All three track reconstruction methods have
an \class{AliESD} object as argument, which is used to exchange track
information between detectors without introducing dependencies between the
code of the detector modules.
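The sequence of these three passes can be sketched with a simplified
stand-in for the tracker interface (the classes and the
\method{ReconstructBarrel} driver below are illustrative only, not the
actual AliRoot signatures):

\begin{lstlisting}[language=C++]
#include <string>
#include <vector>

// Illustrative stand-in for the AliTracker interface: each pass
// simply logs its name into the event record.
struct Event { std::vector<std::string> log; };

class Tracker {
public:
  explicit Tracker(std::string name) : fName(name) {}
  void LoadClusters(Event&)      {}  // read the clusters tree
  void UnloadClusters()          {}  // release the clusters
  void Clusters2Tracks(Event& e) { e.log.push_back(fName + ":find");  }
  void PropagateBack(Event& e)   { e.log.push_back(fName + ":back");  }
  void RefitInward(Event& e)     { e.log.push_back(fName + ":refit"); }
private:
  std::string fName;
};

// The three barrel passes: inward finding/fitting (TPC then ITS),
// outward propagation (starting with the ITS), and the inward refit.
void ReconstructBarrel(Event& e) {
  Tracker tpc("TPC"), its("ITS"), trd("TRD");
  tpc.Clusters2Tracks(e); its.Clusters2Tracks(e);                     // pass 1
  its.PropagateBack(e);   tpc.PropagateBack(e); trd.PropagateBack(e); // pass 2
  trd.RefitInward(e);     tpc.RefitInward(e);   its.RefitInward(e);   // pass 3
}
\end{lstlisting}

The point of the sketch is the ordering: each pass traverses the detectors
in the direction in which the track momentum is best constrained.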
Depending on the way the information is used, the tracking methods can be
divided into two large groups: global methods and local methods. Each
group has advantages and disadvantages.

With the global methods, all the track measurements are treated
simultaneously and the decision to include or exclude a measurement is
taken when all the information about the track is known.
Typical algorithms belonging to this class are combinatorial methods,
the Hough transform, templates, and conformal mappings. The advantages are
stability with respect to noise and mismeasurements, and the possibility
to operate directly on the raw data. On the other hand, these methods
require a precise global track model. Such a track model can sometimes be
unknown or may not even exist because of stochastic processes (energy
losses, multiple scattering), non-uniformity of the magnetic field, etc.
In ALICE, global tracking methods are used extensively in the
High-Level Trigger (HLT) software. There we
are mostly interested in the reconstruction of high-momentum tracks
only; the required precision is not crucial, but the speed of the
calculations is of great importance.

Local methods do not need the knowledge of the global track model.
The track parameters are always estimated `locally' at a given point
in space. The decision to accept or to reject a measurement is made using
either the local information or the information coming from the previous
`history' of the track. With these methods, all the local track
peculiarities (stochastic physics processes, magnetic fields, detector
geometry) can be naturally accounted for. Unfortunately, the local methods
rely on sophisticated space-point reconstruction algorithms (including the
unfolding of overlapped clusters). They are sensitive to noise, to wrong or
displaced measurements, and to the precision of the space-point error
parameterization. The most advanced kind of local track-finding method is
Kalman filtering, which was introduced by P.~Billoir in
1983~\cite{MC:billoir}.

When applied to the track reconstruction problem, the Kalman-filter
approach shows many attractive properties:

\begin{itemize}
\item It is a method for simultaneous track recognition and fitting.

\item There is a possibility to reject incorrect space points `on
the fly', during a single tracking pass. Such incorrect points can
appear as a consequence of the imperfection of the cluster finder, or
they may be due to noise, or they may be points from other tracks
accidentally captured in the list of points to be associated with
the track under consideration. In the other tracking methods one
usually needs an additional fitting pass to get rid of incorrectly
assigned points.

\item In the case of substantial multiple scattering, track
measurements are correlated and therefore large matrices (of the
size of the number of measured points) need to be inverted during
a global fit. In the Kalman-filter procedure we only have to
manipulate up to $5 \times 5$ matrices (although as many times as
we have measured space points), which is much faster.

\item One can handle multiple scattering and
energy losses in a simpler way than in the case of global
methods. At each step the material budget can be calculated and the
mean correction applied accordingly.

\item It is a natural way to find the extrapolation
of a track from one detector to another (for example from the TPC
to the ITS or to the TRD).
\end{itemize}
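The filtering step itself can be illustrated with a toy one-dimensional
example (the real track filter manipulates a five-parameter state vector
and its $5 \times 5$ covariance matrix):

\begin{lstlisting}[language=C++]
// One-dimensional Kalman measurement update (toy illustration only).
struct State { double x; double P; };  // estimate and its variance

State KalmanUpdate(State s, double meas, double measVar) {
  double K = s.P / (s.P + measVar);    // Kalman gain
  s.x += K * (meas - s.x);             // correct the estimate with the residual
  s.P *= (1.0 - K);                    // the variance shrinks after the update
  return s;
}
\end{lstlisting}

The residual $(\mathrm{meas} - x)$, normalized by its variance, is the
quantity whose $\chi^2$ contribution can be used to reject an incorrect
point `on the fly' before the update is applied.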
In ALICE we require good track-finding efficiency and reconstruction
precision for tracks down to \mbox{\pt = 100 MeV/$c$.} Some of the ALICE
tracking detectors (ITS, TRD) have a significant material budget.
Under such conditions one cannot neglect the energy losses or the multiple
scattering in the reconstruction. There are also rather
big dead zones between the tracking detectors, which complicate finding
the continuation of the same track. For all these reasons,
it is the Kalman-filtering approach that has been our choice for the
offline reconstruction since 1994.

% \subsubsection{General tracking strategy}

The reconstruction software for the ALICE central tracking detectors (the
ITS, TPC and TRD) shares a common convention on the coordinate
system used. All the clusters and tracks are always expressed in some local
coordinate system related to a given sub-detector (TPC sector, ITS module
etc.). This local coordinate system is defined as follows:

\begin{itemize}
\item it is a right-handed Cartesian coordinate system;
\item its origin and the $z$ axis coincide with those of the global
ALICE coordinate system;
\item the $x$ axis is perpendicular to the sub-detector's `sensitive plane'
(TPC pad row, ITS ladder etc.).
\end{itemize}

Such a choice reflects the symmetry of the ALICE set-up
and therefore simplifies the reconstruction equations.
It also enables the fastest possible transformations from
a local coordinate system to the global one and back again,
since these transformations become simple single rotations around the
$z$ axis.
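Such a transformation can be sketched as a single rotation by the sector
angle $\alpha$ (illustrative plain C++, not the actual AliRoot
implementation):

\begin{lstlisting}[language=C++]
#include <cmath>

// Rotation by the sector angle alpha around the z axis: the local x axis
// is normal to the sensitive plane, and z is common to both frames.
void LocalToGlobal(double alpha, const double loc[3], double glob[3]) {
  double c = std::cos(alpha), s = std::sin(alpha);
  glob[0] = c * loc[0] - s * loc[1];
  glob[1] = s * loc[0] + c * loc[1];
  glob[2] = loc[2];                  // z is unchanged
}

void GlobalToLocal(double alpha, const double glob[3], double loc[3]) {
  LocalToGlobal(-alpha, glob, loc);  // the inverse is a rotation by -alpha
}
\end{lstlisting}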
The reconstruction begins with cluster finding in all of the ALICE central
detectors (ITS, TPC, TRD, TOF, HMPID and PHOS). Using the clusters
reconstructed in the two pixel layers of the ITS, the position of the
primary vertex is estimated and the track finding starts. As
described later, the cluster-finding as well as the track-finding procedures
differ between detectors in some detector-specific features.
Moreover, within a given detector, because of the high occupancy and the
large number of overlapping clusters, the cluster finding and the track
finding are not completely independent: the number and positions of the
clusters are completely determined only at the track-finding step.

The general tracking strategy is the following. We start from our
best tracking device, \ie the TPC, and from the outer radius where the
track density is minimal. First, the track candidates (`seeds') are
found. Because of the small number of clusters assigned to a seed, the
precision of its parameters is not sufficient to safely extrapolate it
outwards to the other detectors. Instead, the tracking stays within the TPC
and proceeds towards the smaller TPC radii. Whenever
possible, new clusters are associated with a track candidate
at each step of the Kalman filter, if they are within a given distance
from the track prolongation, and the track parameters are more and
more refined. When all of the seeds are extrapolated to the inner limit of
the TPC, the tracking proceeds into the ITS. The ITS tracker tries to
prolong the TPC tracks as close as possible to the primary vertex.
On the way to the primary vertex, the tracks are assigned additional,
precisely reconstructed ITS clusters, which also improves
the estimation of the track parameters.

After all the track candidates from the TPC have been assigned their
clusters in the ITS, a special ITS stand-alone tracking procedure is applied
to the rest of the ITS clusters. This procedure tries to recover the
tracks that were not found in the TPC because of the \pt cut-off, the dead
zones between the TPC sectors, or decays.

At this point the tracking is restarted from the vertex back to the
outer layer of the ITS and then continued towards the outer wall of the
TPC. For tracks labeled by the ITS tracker as potentially
primary, several particle-mass-dependent time-of-flight hypotheses
are calculated. These hypotheses are then used for the particle
identification (PID) with the TOF detector. Once the outer
radius of the TPC is reached, the precision of the estimated track
parameters is sufficient to extrapolate the tracks to the TRD, TOF, HMPID
and PHOS detectors. Tracking in the TRD is done in a similar way to that
in the TPC. Tracks are followed up to the outer wall of the TRD, and the
assigned clusters improve the momentum resolution further.

% matching with the TOF, HMPID and PHOS is done, and the tracks acquire
% additional PID information.
Next, the tracks are extrapolated to the TOF, HMPID and PHOS, where they
acquire the PID information.
Finally, all the tracks are refitted with the Kalman filter backwards to
the primary vertex (or to the innermost possible radius, in the case of
secondary tracks). This gives the most precise information about
the track parameters at the point where the track appeared.

The tracks that passed the final refit towards the primary vertex are used
for the secondary-vertex (V$^0$, cascade, kink) reconstruction. There is
also an option to reconstruct the secondary vertexes `on the fly' during the
tracking itself. The potential advantage of such a possibility is that
the tracks coming from a secondary vertex candidate are not extrapolated
beyond the vertex, thus minimizing the risk of picking up a wrong track
prolongation. This option is currently under investigation.

The reconstructed tracks (together with the PID information), as well as the
kink, V$^0$ and cascade particle decays, are then stored in the Event
Summary Data (ESD).

More details about the reconstruction algorithms can be found in
Chapter~5 of the ALICE Physics Performance Report~\cite{PPRVII}.

\textbf{Filling of ESD}

After the tracks have been reconstructed and stored in the \class{AliESD}
object, further information is added to the ESD. For each detector the
method \method{FillESD} of the reconstructor is called. Inside this method,
\eg V0s are reconstructed or particles are identified (PID). For the PID a
Bayesian approach is used (see Appendix~\ref{BayesianPID}). The constants
and some functions that are used for the PID are defined in a dedicated
class.

\textbf{Monitoring of Performance}

For the monitoring of the track-reconstruction performance, objects of the
class \class{AliTrackReference} are used. Corresponding objects are created
during the reconstruction at the same locations as the
\class{AliTrackReference} objects during the simulation, so the
reconstructed tracks can be easily compared with the simulated ones.
This makes it possible to study and monitor the performance of the track
reconstruction in detail.
The creation of the objects used for the comparison should not
interfere with the reconstruction algorithm and can be switched on or off.

Several ``comparison'' macros permit monitoring of the efficiency and the
resolution of the tracking. Here is a typical usage (the simulation
and the reconstruction have been done in advance):

\begin{lstlisting}[language=C++]
root [0] gSystem->SetIncludePath("-I$ROOTSYS/include \
-I$ALICE_ROOT/include");
root [1] .L $ALICE_ROOT/TPC/AliTPCComparison.C++
root [2] .L $ALICE_ROOT/ITS/AliITSComparisonV2.C++
root [3] .L $ALICE_ROOT/TOF/AliTOFComparison.C++
root [4] AliTPCComparison()
root [5] AliITSComparisonV2()
root [6] AliTOFComparison()
\end{lstlisting}

Another macro can be used to provide a preliminary estimate of the
combined acceptance: \texttt{STEER/CheckESD.C}.

The following classes are used in the reconstruction:

\begin{itemize}
\item \class{AliTrackReference}:
this class is used to store the position and the momentum of a
simulated particle at given locations of interest (\eg when the
particle enters or exits a detector, or when it decays). It is used
mainly for debugging and tuning of the tracking.

\item \class{AliExternalTrackParam}:
this class describes the status of a track at a given point.
It knows the track parameters and their covariance matrix.
This parameterization is used to exchange tracks between the detectors.
A set of functions returning the position and the momentum of tracks
in the global coordinate system, as well as the track impact parameters,
are implemented. There is a possibility to propagate the track to a
given radius with the methods \method{PropagateTo} and \method{Propagate}.

\item \class{AliKalmanTrack} and derived classes:
these classes are used to find and fit tracks with the Kalman approach.
The \class{AliKalmanTrack} defines the interfaces and implements some
common functionality. The derived classes know about the clusters
assigned to the track. They also update the information in an
\class{AliESDtrack}.
The current status of the track during the track reconstruction can be
represented by an \class{AliExternalTrackParam}, and the history of the
track during the track reconstruction can be stored
in a list of \class{AliExternalTrackParam} objects.
The \class{AliKalmanTrack} defines the methods:
\begin{itemize}
\item \method{Double\_t GetDCA(...)}: returns the distance
of closest approach between this track and the track passed as the
argument;
\item \method{Double\_t MeanMaterialBudget(...)}: calculates the mean
material budget and material properties between two points.
\end{itemize}

\item \class{AliTracker} and subclasses:
the \class{AliTracker} is the base class for all the trackers in the
different detectors. It fixes the interface needed to find and
propagate tracks. The actual implementation is done in the derived classes.

\item \class{AliESDtrack}:
this class combines the information about a track from the different
detectors. It knows the current status of the track
(\class{AliExternalTrackParam}) and it has (non-persistent) pointers
to the individual \class{AliKalmanTrack} objects from each detector
which contributed to the track.
It knows about some detector-specific quantities like the number and
bit pattern of the assigned clusters, the $\dd E/\dd x$, the $\chi^2$, etc.,
and it can calculate a conditional probability for a given mixture of
particle species following the Bayesian approach.
It also defines a track label pointing to the corresponding simulated
particle in case of \MC.
The combined track objects are the basis for physics analysis.
\end{itemize}

The example below shows a reconstruction with a non-uniform magnetic field
(the simulation is also done with the non-uniform magnetic field, by adding
the following line to Config.C: field$\to$SetL3ConstField(1)). Only
the barrel detectors are reconstructed, a specific TOF reconstruction
has been requested, and the raw data are used as input:

\begin{lstlisting}[language=C++]
AliReconstruction reco;
reco.SetRunReconstruction("ITS TPC TRD TOF");
reco.SetNonuniformFieldTracking();
reco.SetInput("raw.root");
reco.Run();
\end{lstlisting}

% -----------------------------------------------------------------------------

\subsection{Event summary data}\label{ESD}

The classes which are needed to process and analyze the ESD are packed
together in a standalone library (libESD.so) which can be used
separately from the \aliroot framework. Inside each
ESD object the data are stored in polymorphic containers filled with
reconstructed tracks, neutral particles, etc. The main class is
\class{AliESD}, which contains all the information needed during the
analysis:

\begin{itemize}
\item fields to identify the event, such as event number, run number,
time stamp, type of event, trigger type (mask), trigger cluster (mask),
version of the reconstruction, etc.;
\item the reconstructed ZDC energies and the number of participants;
\item the primary-vertex information: the vertex $z$-position estimated by
the START, the primary vertex estimated by the SPD, and the primary vertex
estimated using the ESD tracks;
\item the SPD tracklet multiplicity;
\item the interaction time estimated by the START, together with additional
time and amplitude information from the START;
\item an array of ESD tracks;
\item arrays of HLT tracks, both from the conformal-mapping and from
the Hough-transform reconstruction;
\item an array of MUON tracks;
\item an array of PMD tracks;
\item an array of TRD ESD tracks (triggered);
\item arrays of reconstructed $V^0$ vertexes, cascade decays and kinks;
\item an array of calorimeter clusters for PHOS/EMCAL;
\item indexes of the information from the PHOS and EMCAL detectors in the
array of calorimeter clusters.
\end{itemize}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% -----------------------------------------------------------------------------

\subsection{Introduction}

The analysis of experimental data is the final stage of the event
processing and it is usually repeated many times. Analysis is a very diverse
activity, where the goals of each
particular analysis pass may differ significantly.

The ALICE detector~\cite{PPR} is optimized for the
reconstruction and analysis of heavy-ion collisions.
In addition, ALICE has a broad physics programme devoted to
\pp and \pA interactions.

The data analysis is coordinated by the Physics Board via the Physics
Working Groups (PWGs). At present the following PWGs have started their
activity:

\begin{itemize}
\item PWG0 \textbf{first physics};
\item PWG1 \textbf{detector performance};
\item PWG2 \textbf{global event characteristics:} particle multiplicity,
centrality, energy density, nuclear stopping; \textbf{soft physics:}
chemical composition (particle and resonance
production, particle ratios and spectra, strangeness enhancement),
reaction dynamics (transverse and elliptic flow, HBT correlations,
event-by-event dynamical fluctuations);
\item PWG3 \textbf{heavy flavors:} quarkonia, open charm and beauty
production;
\item PWG4 \textbf{hard probes:} jets, direct photons.
\end{itemize}

Each PWG has a corresponding module in \aliroot (PWG0--PWG4). The code
is managed by the CVS administrators.

The \pp and \pA programme will provide, on the one hand, reference points
for comparison with heavy ions. On the other hand, ALICE will also
pursue genuine and detailed \pp studies. Some
quantities, in particular the global characteristics of interactions, will
be measured during the first days of running, exploiting the low-momentum
measurement and particle-identification capabilities of ALICE.

The ALICE computing framework is described in detail in the Computing
Technical Design Report~\cite{CompTDR}. This article is based on
Chapter~6 of that document.

\paragraph{The analysis activity.}

We distinguish two main types of analysis: scheduled analysis and
chaotic analysis. They differ in their data access pattern, in the
storage and registration of the results, and in the frequency of
changes in the analysis code (more details are available below).

In the ALICE Computing Model the analysis starts from the Event Summary
Data (ESD). These are produced during the reconstruction step and contain
all the information needed for the analysis. The size of the ESD is
about one order of magnitude smaller than that of the corresponding raw
data. The analysis tasks produce Analysis
Object Data (AOD) specific to a given set of physics objectives.
Further passes for the specific analysis activity can be performed on
the AODs, until the selection parameters or algorithms are changed.

A typical data analysis task usually requires processing of
selected sets of events. The selection is based on the event
topology and characteristics, and is done by querying the tag
database. The tags represent physics quantities which characterize
each run and event, and permit fast selection. They are created
after the reconstruction and also contain the unique
identifier of the ESD file. A typical query, when translated into
natural language, could look like ``Give me
all the events with impact parameter in $<$range$>$
containing jet candidates with energy larger than $<$threshold$>$''.
This results in a list of events and file identifiers to be used in the
subsequent event loop.

The next step of a typical analysis consists of a loop over all the events
in the list and the calculation of the physics quantities of
interest. Usually, for each event, there is a set of embedded loops over the
reconstructed entities such as tracks, ${\rm V^0}$ candidates, neutral
clusters, etc., the main goal of which is to select the signal
candidates. Inside each loop a number of criteria (cuts) are applied to
reject the background combinations and to select the signal ones. The
cuts can be based on geometrical quantities, such as the impact parameters
of the tracks with respect to the primary vertex, the distance between the
cluster and the closest track, the distance of closest approach between the
tracks, or the angle between the momentum vector of the particle combination
and the line connecting the production and decay vertexes. They can
also be based on kinematic quantities, such as momentum ratios, minimal and
maximal transverse momentum, or
angles in the rest frame of the particle combination.
Particle-identification criteria are also among the most common
selection criteria.
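Such an embedded selection loop can be sketched as follows (the candidate
structure and the cut values are purely illustrative):

\begin{lstlisting}[language=C++]
#include <vector>

// Hypothetical V0-candidate summary, used only for illustration.
struct V0Candidate {
  double dcaDaughters;   // distance of closest approach between the tracks
  double impactParam;    // impact parameter w.r.t. the primary vertex
  double cosPointing;    // cosine of the pointing angle
};

// Keep only the candidates passing simple geometrical cuts
// (illustrative values, not tuned ALICE cuts).
std::vector<V0Candidate> SelectV0s(const std::vector<V0Candidate>& cands) {
  std::vector<V0Candidate> sel;
  for (std::size_t i = 0; i < cands.size(); ++i) {
    const V0Candidate& c = cands[i];
    if (c.dcaDaughters < 0.5 && c.impactParam > 0.05 && c.cosPointing > 0.99)
      sel.push_back(c);
  }
  return sel;
}
\end{lstlisting}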
The optimization of the selection criteria is one of the most
important parts of the analysis. The goal is to maximize the
signal-to-background ratio in the case of search tasks, or another
ratio (typically ${\rm Signal/\sqrt{Signal+Background}}$) in the
case of the measurement of a given property. Usually, this optimization is
performed using simulated events, where the information from the
particle generator is available.
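A minimal sketch of such an optimization, scanning a set of cut values
for the best ${\rm Signal/\sqrt{Signal+Background}}$ (the numbers used
here are purely illustrative):

\begin{lstlisting}[language=C++]
#include <cmath>
#include <cstddef>

// Significance used to optimize a cut when measuring a given property.
double Significance(double signal, double background) {
  return signal / std::sqrt(signal + background);
}

// Return the index of the cut value giving the best significance,
// given the signal/background yields surviving each cut.
std::size_t BestCut(const double sig[], const double bkg[], std::size_t n) {
  std::size_t best = 0;
  for (std::size_t i = 1; i < n; ++i)
    if (Significance(sig[i], bkg[i]) > Significance(sig[best], bkg[best]))
      best = i;
  return best;
}
\end{lstlisting}

A looser cut keeps more signal but much more background; the scan picks
the working point where the two effects balance.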
After the optimization of the selection criteria, one has to take into
account the combined acceptance of the detector. This is a complex,
analysis-specific quantity which depends on the geometrical acceptance,
the trigger efficiency, the decays of particles, the reconstruction
efficiency, the efficiency of the particle identification, and the
efficiency of the selection cuts. The components of the combined acceptance
are usually parameterized, and their product is used to unfold the
experimental distributions or is applied during the simulation of some
model parameters.
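Schematically, the combined acceptance is the product of its parameterized
components, and the unfolding of a measured yield amounts to dividing it
out (an illustrative sketch, not actual analysis code):

\begin{lstlisting}[language=C++]
// Combined acceptance as the product of its parameterized components
// (geometrical acceptance, trigger efficiency, decay branching/losses,
// reconstruction, PID and cut efficiencies). Values are illustrative.
double CombinedAcceptance(double geom, double trigger, double decays,
                          double recoEff, double pidEff, double cutEff) {
  return geom * trigger * decays * recoEff * pidEff * cutEff;
}

// Unfold a measured yield by the combined acceptance.
double CorrectedYield(double measured, double acceptance) {
  return measured / acceptance;
}
\end{lstlisting}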
The last part of the analysis usually involves quite complex
mathematical treatments and sophisticated statistical tools. Here one
may include the correction for systematic effects, the estimation of
statistical and systematic errors, etc.

\paragraph{Scheduled analysis.}

The scheduled analysis typically uses all
the available data from a given period, and stores and registers the results
using \grid middleware. The tag database is updated accordingly. The
AOD files generated during the scheduled
analysis can be used by several subsequent analyses, or by a class of
related physics tasks.
The procedure of scheduled analysis is centralized and can be
considered as data filtering. The requirements come from the PWGs and
are prioritized by the Physics Board, taking into
account the available computing and storage resources. The analysis
code is tested in advance and released before the processing starts.

Each PWG will require some sets of
AODs per event, which are specific to one or
a few analysis tasks. The creation of the AOD sets is managed centrally.
The event list of each AOD set
will be registered, and the access to the AOD files will be granted to
all ALICE collaborators. The AOD files will be generated
at different computing centers and will be stored on
the corresponding storage
elements. The processing of each file set will thus be done in a
distributed way on the \grid. Some of the AOD sets may be quite small
and would fit on a single storage element or even on one computer; in
this case the corresponding tools for file replication, available
in the ALICE \grid infrastructure, will be used.

\paragraph{Chaotic analysis.}

The chaotic analysis is focused on a single physics task and
is typically based on the filtered data from the scheduled
analysis. Each physicist may also
access directly large parts of the ESD in order to search for rare
events or processes.
Usually the user develops the code using a small subsample
of the data, and changes the algorithms and criteria frequently. The
analysis macros and software are tested many times on relatively
small data volumes, both experimental and \MC.
The output is often only a set of histograms.
Such a tuning of the analysis code can be done on a local
data set or on distributed data using \grid tools. The final version
of the analysis will eventually be submitted to the \grid and will access
large portions of the data, possibly up to the totality of the ESDs. The
results may be registered in the \grid file
catalog and used at later stages of the analysis.
This activity may or may not be coordinated inside
the PWGs, via the definition of priorities. The
chaotic analysis is carried out within the computing resources of the
collaboration.

% -----------------------------------------------------------------------------
\subsection{Infrastructure tools for distributed analysis}

\subsubsection{gShell}

The main infrastructure tools for distributed analysis have been
described in Chapter~3 of the Computing TDR~\cite{CompTDR}. The actual
middleware is hidden by an interface to the \grid,
gShell~\cite{CH6Ref:gShell}, which provides a
single working shell.
The gShell package contains all the commands a user may need for
file-catalog queries, the creation of sub-directories in the user space,
the registration and removal of files, job submission, and process
monitoring. The actual \grid middleware is completely transparent to
the user.

The gShell overcomes the scalability problem of direct client
connections to databases. All clients connect to the
gLite~\cite{CH6Ref:gLite} API
services. This service is implemented as a pool of preforked server
daemons which serve single-client requests. The client-server
protocol implements a client state, which is represented by a current
working directory, a client session ID, and a time-dependent symmetric
cipher on both ends to guarantee client privacy and security. The
server daemons execute the client calls with the identity of the connected
client.

\subsubsection{PROOF -- the Parallel ROOT Facility}

The Parallel ROOT Facility, PROOF~\cite{CH6Ref:PROOF}, has been specially
designed and developed
to allow the analysis and mining of very large data sets, minimizing the
response time. It makes use of the inherent parallelism in event data
and implements an architecture that optimizes I/O and CPU utilization
in heterogeneous clusters with distributed storage. The system
provides transparent and interactive access to terabyte-scale data
sets. Being part of the \ROOT framework, PROOF inherits the benefits of
a high-performance object storage system and a wealth of statistical and
visualization tools.
The most important design features of PROOF are:

\begin{itemize}
\item transparency -- there is no difference between a local \ROOT and
a remote parallel PROOF session;
\item scalability -- there are no implicit limitations on the number of
computers used in parallel;
\item adaptability -- the system is able to adapt to variations in the
remote environment.
\end{itemize}

PROOF is based on a multi-tier architecture: the \ROOT client session,
the PROOF master server, optionally a number of PROOF sub-master
servers, and the PROOF worker servers. The user connects from the \ROOT
session to a master server on a remote cluster, and the master server
creates sub-masters and worker servers on all the nodes in the
cluster. All workers process queries in parallel and the results are
presented to the user as coming from a single server.

PROOF can be run either in a purely interactive way, with the user
remaining connected to the master and worker servers and the analysis
results being returned to the user's \ROOT session for further
analysis, or in an `interactive batch' way, where the user disconnects
from the master and workers (see Fig.~\vref{CH3Fig:alienfig7}). By
reconnecting later to the master server the user can retrieve the
analysis results for that particular
query. This last mode is useful for relatively long-running queries
(several hours) or for submitting many queries at the same time. Both
modes will be important for the analysis of the ALICE data.

\begin{figure}[htb]
\centering
\includegraphics[width=11.5cm]{picts/alienfig7}
\caption{Setup and interaction with the \grid middleware of a user
PROOF session distributed over many computing centers.}
\label{CH3Fig:alienfig7}
\end{figure}

% -----------------------------------------------------------------------------

\subsection{Analysis tools}

This section is devoted to the existing analysis tools in \ROOT and
\aliroot. As discussed in the introduction, some very broad
analysis tasks include the search for rare events (in this case the
physicist tries to maximize the signal-to-background ratio), or
measurements where it is important to maximize the signal
significance. The tools that provide the possibility to apply certain
selection criteria and to find the interesting combinations within
a given event are described below. Some of them are very general and are
used in many different places, for example the statistical
tools; others are specific to a given analysis.

\subsubsection{Statistical tools}

Several commonly used statistical tools are available in
\ROOT~\cite{ROOT}. \ROOT provides
classes for efficient data storage and access, such as trees and ntuples.
The ESD information is organized in a tree, where each event is a separate
entry. This allows a chain of the ESD files to be made and the
elaborate selector mechanisms to be used in order to exploit the PROOF
services. The tree classes
permit easy navigation, selection, browsing, and visualization of the
data in the branches.

\ROOT also provides histogramming and fitting classes, which are used
for the representation of all the one- and multi-dimensional
distributions and for the extraction of their fitted parameters. \ROOT provides
an interface to powerful and robust minimization packages, which can be
used directly in some special parts of the analysis. A special
fitting class allows one to decompose an experimental histogram into a
superposition of source histograms.
\ROOT also has a set of sophisticated statistical analysis tools, such as
principal component analysis, robust estimators, and neural networks.
The calculation of confidence levels is provided as well.

Additional statistical functions are included in \texttt{TMath}.
\subsubsection{Calculation of kinematic variables}
The main \ROOT physics classes include 3-vectors and Lorentz
vectors, and operations
such as translation, rotation, and boost. The calculation of
kinematic variables
such as transverse and longitudinal momentum, rapidity,
pseudorapidity, effective mass, and many others is provided as well.
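As an illustration, these calculations can be sketched in a few lines of stand-alone C++ (a simplified sketch for this text; in \ROOT itself the \texttt{TVector3} and \texttt{TLorentzVector} classes provide such methods):

```cpp
#include <cmath>

// Simplified stand-alone sketch of the kinematic calculations
// (in ROOT these are provided by TVector3 / TLorentzVector).
struct FourVector {
  double px, py, pz, e;
  double Pt() const { return std::sqrt(px*px + py*py); }           // transverse momentum
  double P()  const { return std::sqrt(px*px + py*py + pz*pz); }   // total momentum
  double Rapidity() const { return 0.5 * std::log((e + pz) / (e - pz)); }
  double Eta() const {                                             // pseudorapidity
    double p = P();
    return 0.5 * std::log((p + pz) / (p - pz));
  }
  double M() const { return std::sqrt(e*e - px*px - py*py - pz*pz); } // effective mass
};

// Effective (invariant) mass of a two-particle combination.
inline double InvariantMass(const FourVector& a, const FourVector& b) {
  FourVector s{a.px + b.px, a.py + b.py, a.pz + b.pz, a.e + b.e};
  return s.M();
}
```

The same conventions (energy as the fourth component, natural units) are used by the \ROOT classes.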
\subsubsection{Geometrical calculations}
There are several classes which can be used for the
measurement of the primary vertex: \texttt{AliITSVertexerZ},
\texttt{AliITSVertexerIons}, \texttt{AliITSVertexerTracks}, etc. A fast estimation of the {\it z}-position can be
done by \texttt{AliITSVertexerZ}, which works for both lead--lead
and proton--proton collisions. A universal tool is provided by
\texttt{AliITSVertexerTracks}, which calculates the position and
covariance matrix of the primary vertex based on a set of tracks, and
also estimates the $\chi^2$ contribution of each track. An iterative
procedure can be used to remove the secondary tracks and improve the
precision.

Track propagation to the primary vertex (inward) is provided as well.
The secondary vertex reconstruction in the case of ${\rm V^0}$s is provided by
\texttt{AliV0vertexer}, and in the case of cascade hyperons by
\texttt{AliCascadeVertexer}. A universal tool is
\texttt{AliITSVertexerTracks}, which can also be used to find secondary
vertices close to the primary one, for example decays of open charm
like ${\rm D^0 \to K^- \pi^+}$ or ${\rm D^+ \to K^- \pi^+ \pi^+}$. All
the vertex reconstruction classes also calculate the distance of closest
approach (DCA) between the track and the vertex.
The calculation of impact parameters with respect to the primary vertex
is done during the reconstruction, and the information is available in
\texttt{AliESDtrack}. It is then possible to recalculate the
impact parameter during the ESD analysis, after an improved determination
of the primary vertex position using the reconstructed ESD tracks.
\subsubsection{Global event characteristics}

The impact parameter of the interaction and the number of participants
are estimated from the energy measurements in the ZDC. In addition,
the information from the FMD, PMD, and T0 detectors is available. It
gives a valuable estimate of the event multiplicity at high rapidities
and permits global event characterization. Together with the ZDC
information it improves the determination of the impact parameter,
number of participants, and number of binary collisions.

The event-plane orientation is calculated by the \texttt{AliFlowAnalysis} class.
\subsubsection{Comparison between reconstructed and simulated parameters}

The comparison between the reconstructed and simulated parameters is
an important part of the analysis. It is the only way to estimate the
precision of the reconstruction. Several example macros exist in
\aliroot and can be used for this purpose: \texttt{AliTPCComparison.C},
\texttt{AliITSComparisonV2.C}, etc. As a first step in each of these
macros the list of so-called `good tracks' is built. The definition of
a good track is explained in detail in the ITS\cite{CH6Ref:ITS_TDR} and
TPC\cite{CH6Ref:TPC_TDR} Technical Design
Reports. The essential point is that the track
goes through the detector and can be reconstructed. Using the `good
tracks' one then estimates the efficiency of the reconstruction and
the resolution of the reconstructed parameters.
Another example is specific to the MUON arm: the \texttt{MUONRecoCheck.C}
macro compares the reconstructed muon tracks with the simulated ones.

There is also the possibility to calculate the resolutions directly, without
additional requirements on the initial track. One can use the
so-called track label and retrieve the corresponding simulated
particle directly from the particle stack (\texttt{AliStack}).
\subsubsection{Event mixing}

One particular analysis approach in heavy-ion physics is the
estimation of the combinatorial background using event mixing. Part of the
information (for example the positive tracks) is taken from one
event, and another part (for example the negative tracks) is taken from
another `similar' event. The event `similarity' is very important, because
only then do the combinations produced from different events
represent the combinatorial background. Typically `similar' in
the example above means with the same multiplicity of negative
tracks. One may require in addition similar impact parameters of the
interactions, rotation of the tracks of the second event to adjust the
event plane, etc. The possibility for event mixing is provided in
\aliroot by the fact that the ESD is stored in trees, so one can chain
and access many ESD objects simultaneously. The first pass is then
to order the events according to the desired criterion of
`similarity' and to use the obtained index for accessing the `similar'
events in the embedded analysis loops. An example of event mixing is
shown in Fig.~\ref{CH6Fig:phipp}. The background distribution has been
obtained using `mixed events'. The signal distribution has been taken
directly from the \MC simulation. The `experimental distribution' has
been produced by the analysis macro and decomposed as a
superposition of the signal and background histograms.
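The mixing pass described above can be sketched as follows (a stand-alone illustration of the technique; the structure names are hypothetical and not the actual \aliroot classes):

```cpp
#include <cstddef>
#include <map>
#include <vector>

struct Track { double px, py; int charge; };
using Event = std::vector<Track>;

// Count the negative tracks -- the `similarity' criterion of the example.
inline std::size_t NegMult(const Event& ev) {
  std::size_t n = 0;
  for (const Track& t : ev) if (t.charge < 0) ++n;
  return n;
}

// Build mixed pairs: positives from one event, negatives from a
// *different* event of the same class. Returns the number of pairs.
inline std::size_t MixPairs(const std::vector<Event>& events) {
  // first pass: index the events by the similarity criterion
  std::map<std::size_t, std::vector<std::size_t>> classes;
  for (std::size_t i = 0; i < events.size(); ++i)
    classes[NegMult(events[i])].push_back(i);

  std::size_t pairs = 0;
  for (const auto& cls : classes)        // embedded analysis loops
    for (std::size_t i : cls.second)
      for (std::size_t j : cls.second) {
        if (i == j) continue;            // never mix an event with itself
        for (const Track& pos : events[i]) {
          if (pos.charge <= 0) continue;
          for (const Track& neg : events[j])
            if (neg.charge < 0) ++pairs; // here one would fill the background histogram
        }
      }
  return pairs;
}
```

In a real analysis the innermost statement would fill the background histogram rather than count pairs.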
\begin{figure}[ht]
\centering
\includegraphics*[width=120mm]{picts/phipp}
\caption{Mass spectrum of the ${\rm \phi}$-meson candidates produced
inclusively in proton--proton interactions.}
\label{CH6Fig:phipp}
\end{figure}
\subsubsection{Analysis of the High-Level Trigger (HLT) data}

This is a specific analysis which is needed in order to adjust the cuts
in the HLT code, or to estimate the HLT
efficiency and resolution. \aliroot provides a transparent way of doing
such an analysis, since the HLT information is stored in the form of ESD
objects in a parallel tree. This also helps in the monitoring and
visualization of the results of the HLT algorithms.
\subsubsection{EVE -- Event Visualization Environment}

EVE consists of:
\begin{itemize}
\item a small application kernel;
\item graphics classes with editors and OpenGL renderers;
\item CINT scripts that extract data, fill graphics classes and register
them to the application.
\end{itemize}
The framework is still evolving, and some things might not work as expected.
The usage is as follows:
\begin{itemize}
\item Initialize the ALICE environment.
\item Spawn the 'alieve' executable and invoke the alieve\_init.C macro.

To load the first event from the current directory:
\begin{lstlisting}[language=sh]
# alieve alieve_init.C
\end{lstlisting}
To load the 5th event from the directory /data/my-pp-run:
\begin{lstlisting}[language=sh]
# alieve 'alieve_init.C("/data/my-pp-run", 5)'
\end{lstlisting}
Or, from the ROOT prompt:
\begin{lstlisting}[language=sh]
root[0] .L alieve_init.C
root[1] alieve_init("/somedir")
\end{lstlisting}
\item Use the GUI or the CINT command line to invoke further visualization macros.
\item To navigate the events use the macros 'event\_next.C' and 'event\_prev.C'.
These are equivalent to the command-line invocations:
\begin{lstlisting}[language=sh]
root[x] Alieve::gEvent->NextEvent()
\end{lstlisting}
or
\begin{lstlisting}[language=sh]
root[x] Alieve::gEvent->PrevEvent()
\end{lstlisting}
The general form to go to an event via its number is:
\begin{lstlisting}[language=sh]
root[x] Alieve::gEvent->GotoEvent(<event-number>)
\end{lstlisting}
\end{itemize}
See the files in EVE/alice-macros/. For specific uses these should be
edited to suit your needs.

\underline{Directory structure}

EVE is split into two modules: REVE (the ROOT part, not dependent on
AliROOT) and ALIEVE (the ALICE-specific part). For the time being both
modules are kept in the AliROOT CVS.
Alieve/ and Reve/ -- sources\\
macros/ -- macros for bootstrapping and internal steering\\
alice-macros/ -- macros for ALICE visualization\\
alice-data/ -- data files used by the ALICE macros\\
test-macros/ -- macros for tests of specific features; usually one needs
to copy and edit them\\
bin/, Makefile and make\_base.inc -- used for the stand-alone build of the
packages.
\begin{itemize}
\item Problems with macro execution

A failed macro execution can leave CINT in a poorly defined state that
prevents further execution of macros. For example:

\begin{lstlisting}[language=sh]
Exception Reve::Exc_t: Event::Open failed opening ALICE ESDfriend from
'/alice-data/coctail_10k/AliESDfriends.root'.

root [1] Error: Function MUON_geom() is not defined in current scope :0:
*** Interpreter error recovered ***
Error: G__unloadfile() File "/tmp/MUON_geom.C" not loaded :0:
\end{lstlisting}
\texttt{gROOT->Reset()} helps in most of the cases.
\end{itemize}
% ------------------------------------------------------------------------------

\subsection{Existing analysis examples in \aliroot}

There are several dedicated analysis tools available in \aliroot. Their results
were used in the Physics Performance Report and described in
ALICE internal notes. There are two main classes of analysis: the
first is based directly on the ESD, while the second first extracts an
AOD and then analyzes it.

\begin{itemize}
\item \textbf{ESD analysis}

\begin{itemize}
\item[ ] \textbf{${\rm V^0}$ and cascade reconstruction/analysis}

The ${\rm V^0}$ candidates
are reconstructed during the combined barrel tracking and stored in
the ESD object. The following criteria are used for the selection:
the minimal-allowed impact parameter (in the transverse plane) for each
track; the maximal-allowed DCA between the two tracks; the maximal-allowed
${\rm V^0}$ pointing angle
(the angle between the momentum vector of the particle combination
and the line connecting the production and decay vertices); the minimal
and maximal radius of the fiducial volume; and the maximal-allowed
${\rm \chi^2}$. The
last criterion requires the covariance matrix of the track parameters,
which is available only in \texttt{AliESDtrack}. The reconstruction
is performed by \texttt{AliV0vertexer}. This class can also be used
in the analysis. An example of reconstructed kaons taken directly
from the ESDs is shown in Fig.~\ref{CH6Fig:kaon}.
\begin{figure}[ht]
\centering
\includegraphics*[width=120mm]{picts/kaon}
\caption{Mass spectrum of the ${\rm K_S^0}$-meson candidates produced
inclusively in \mbox{Pb--Pb} collisions.}
\label{CH6Fig:kaon}
\end{figure}
The cascade hyperons are reconstructed using a ${\rm V^0}$ candidate and a
`bachelor' track selected according to the cuts above. In addition,
one requires that the reconstructed ${\rm V^0}$ effective mass belong to
a certain interval centered on the true value. The reconstruction
is performed by \texttt{AliCascadeVertexer}, and this class can be
used in the analysis.
\item[ ] \textbf{Open charm}

This is the second elaborated example of ESD
analysis. There are two classes, \texttt{AliD0toKpi} and
\texttt{AliD0toKpiAnalysis}, which contain the corresponding analysis
code. The decay under investigation is ${\rm D^0 \to K^- \pi^+}$ and its
charge conjugate. Each ${\rm D^0}$ candidate is formed by a positive and
a negative track, selected to fulfill the following requirements: a
minimal-allowed track transverse momentum and a minimal-allowed track
impact parameter in the transverse plane with respect to the primary
vertex. The selection criteria for each combination include a
maximal-allowed distance of closest approach between the two tracks,
a decay angle of the kaon in the ${\rm D^0}$ rest frame within a given region,
a product of the impact parameters of the two tracks larger than a given value,
and a pointing angle between the ${\rm D^0}$ momentum and the flight line smaller
than a given value. The particle
identification probabilities are used to reject the wrong
combinations, namely ${\rm (K,K)}$ and ${\rm (\pi,\pi)}$, and to enhance the
signal-to-background ratio at low momentum by requiring kaon
identification. All proton-tagged tracks are excluded before the
analysis loop over track pairs. More details can be found in
Ref.~\cite{CH6Ref:Dainese}.
\item[ ] \textbf{Quarkonia analysis}

Muon tracks stored in the ESD can be analyzed, for example, by the macro
\texttt{MUONmassPlot\_ESD.C}.
This macro performs an invariant-mass analysis of muon unlike-sign pairs
and calculates the combinatorial background.
Quarkonia \pt and rapidity distributions are built for \Jpsi and \Ups.
This macro also performs a fast single-muon analysis: \pt, rapidity, and
${\rm \theta}$ vs ${\rm \varphi}$ acceptance distributions for positive
and negative muon
tracks with a maximal-allowed ${\rm \chi^2}$.

\end{itemize}
\item \textbf{AOD analysis}

Often only a small subset of the information contained in the ESD
is needed to perform an analysis. This information
can be extracted and stored in the AOD format in order to reduce
the computing resources needed for the analysis.

The AOD analysis framework implements a set of tools such as data readers,
converters, cuts, and other utility classes.
The design is based on two main requirements: flexibility and a common
AOD particle interface. This guarantees that several analyses can be
done in sequence within the same computing session.
In order to fulfill the first requirement, the analysis is driven by the
`analysis manager' class, to which particular analyses are added.
It performs the loop over events, which are delivered by a
user-specified reader. This design allows the analyses to be ordered
appropriately if some of them depend on the results of the others.
The cuts are designed to provide high flexibility
and performance. A two-level architecture has been adopted
for all the cuts (particle, pair, and event). A class representing a cut
has a list of `base cuts'. Each base cut implements a cut on a
single property or performs a logical operation (and, or) on the results of
two other base cuts.

A class representing a pair of particles buffers all the results,
so they can be re-used if required.

\begin{itemize}
\item[ ] \textbf{Particle momentum correlations (HBT) -- HBTAN module}

Particle momentum correlation analysis is based on the event-mixing technique.
It allows one to extract the signal by dividing the appropriate
particle spectra coming from the original events by those from the
mixed events.

Two analysis objects are currently implemented to perform the mixing:
the standard one and one implementing the Stavinsky
algorithm~\cite{CH6Ref:Stavinsky}. Others can easily be added if needed.

An extensive hierarchy of function base classes has been implemented,
facilitating the creation of new functions.
A wide set of correlation, distribution, and monitoring
functions is already available in the module; see Ref.~\cite{CH6Ref:HBTAN}
for details.

The package contains two implementations of the weighting algorithms used
for correlation simulations (the first developed by Lednicky
\cite{CH6Ref:Weights} and the second due to CRAB \cite{CH6Ref:CRAB}), both
based on a uniform interface.
\item[ ] \textbf{Jet analysis}

The jet analysis\cite{CH6Ref:Loizides} is available in the module JETAN. It has a set of
readers of the form \texttt{AliJetParticlesReader<XXX>}, where \texttt{XXX}
stands for the input source, e.g.
\texttt{HLT}, \texttt{KineGoodTPC}, or \texttt{Kine}, derived from the base class
\texttt{AliJetParticlesReader}. These
provide a uniform interface to
the information from the
kinematics tree, from the HLT, and from the ESD. The first step in the
analysis is the creation of an AOD object: a tree containing objects of
type \texttt{AliJetEventParticles}. The particles are selected using a
cut on the minimal-allowed transverse momentum. The second analysis
step consists of jet finding. Several algorithms are available in
classes of the type \texttt{Ali<XXX>JetFinder}.
An example of AOD creation is provided in
the \texttt{createEvents.C} macro. The usage of the jet finders is illustrated in
the \texttt{findJets.C} macro.
\item[ ] \textbf{${\rm V^0}$ AODs}

The AODs for ${\rm V^0}$ analysis contain several additional parameters,
calculated and stored for fast access. The methods of the class {\tt
AliAODv0} provide access to all the geometrical and kinematic
parameters of a ${\rm V^0}$ candidate, and to the ESD information used
for the calculations.
\item[ ] \textbf{MUON}

There is also a prototype MUON analysis provided in
\texttt{AliMuonAnalysis}. It simply fills several histograms, namely
the transverse momentum and rapidity for positive and negative muons,
the invariant mass of the muon pair, etc.

\end{itemize}
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Analysis Foundation Library}
The result of the reconstruction chain is the Event Summary Data (ESD)
object. It contains all the information that may
be useful in {\it any} analysis. In most cases only a small subset
of this information is needed for a given analysis.
Hence, it is essential to provide a framework for analyses in which
the user can extract only the information required and store it in
the Analysis Object Data (AOD) format, to be used in all
further analyses. Proper data preselection speeds up
the computation significantly. Moreover, the interface of the ESD classes is
designed to fulfill the requirements of the reconstruction
code, which makes it inconvenient for most analysis algorithms,
in contrast to the AOD interface. Additionally, the latter can be
customized to the needs of a particular analysis, if required.
We have developed the analysis foundation library, which
provides a skeleton framework for analyses, defines the AOD data format,
and implements a wide set of basic utility classes that facilitate
the creation of individual analyses.
It contains classes that define the following entities:

\begin{itemize}
\item AOD event format
\item Particle
\item Pair
\item Analysis manager class
\item Base class for analyses
\item Readers
\item Cuts
\item Other utility classes
\end{itemize}

It is designed to fulfill two main requirements:
\begin{itemize}
\item \textbf{It allows for flexibility in designing individual analyses.}
Each analysis has its own best-performing solutions. The most trivial example is
the internal representation of a particle momentum: in some cases the Cartesian coordinate system is preferable, and in others the cylindrical one.
\item \textbf{All analyses use the same AOD particle interface to access the data.}
This guarantees that analyses can be chained. It is important when
one analysis depends on the result of another, so that the latter can
process exactly the same data without the need for any conversion.
It also allows many analyses to be carried out in the same job; consequently, the
computation time connected with
data reading, job submission, etc.\ can be significantly reduced.
\end{itemize}

The design of the framework is described in detail below.
% -----------------------------------------------------------------------------

\subsection{AOD}

The \texttt{AliAOD} class contains only the information required
for an analysis. It is not only the format in which the data are
stored in files; it is also used internally throughout the package
as a particle container.
Currently it contains a \texttt{TClonesArray} of particles and
data members describing the global event properties.
This class is expected to evolve further as new analyses continue to be
developed and their requirements are implemented.
% -----------------------------------------------------------------------------

\subsection{Particle}

\texttt{AliVAODParticle} is a pure virtual class that defines the particle
interface.
Each analysis is allowed to create its own particle class
if none of the already existing ones meets its requirements.
Of course, it must derive from \texttt{AliVAODParticle}.
However, all analyses are obliged to
use exclusively the interface defined in \texttt{AliVAODParticle}.
If additional functionality is required, an appropriate
method is also added to the virtual interface (as a pure virtual or an empty one).
Hence, all other analyses can be run on any AOD, although the processing time
might be longer in some cases (if the internal representation is not
optimal for a given analysis).
We have implemented the standard concrete particle class,
called \texttt{AliAODParticle}. The momentum is stored in
Cartesian coordinates, and the class also has data members
describing the production vertex. All the PID information
is stored in two dynamic arrays. The first array contains
probabilities sorted in descending order,
and the second the corresponding PDG codes (Particle Data Group).
The PID of a particle is defined by a data member which is
an index into the arrays. This solution allows for faster information
access during the analysis and minimizes memory and disk-space consumption.
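The PID storage scheme can be sketched as follows (an illustrative stand-alone class; the names are hypothetical and do not reproduce the actual \texttt{AliAODParticle} code):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Sketch of the PID storage: probabilities sorted in descending order,
// a parallel array of PDG codes, and an index selecting the current PID.
class ParticlePid {
public:
  ParticlePid(std::vector<int> pdg, std::vector<double> prob) {
    // sort both arrays together, by descending probability
    std::vector<std::pair<double, int>> tmp;
    for (std::size_t i = 0; i < pdg.size(); ++i)
      tmp.push_back({prob[i], pdg[i]});
    std::sort(tmp.begin(), tmp.end(),
              [](const auto& a, const auto& b) { return a.first > b.first; });
    for (const auto& p : tmp) { fProb.push_back(p.first); fPdg.push_back(p.second); }
  }
  // the PID of the particle is just an index into the sorted arrays
  void   SelectPid(std::size_t idx) { fPidIndex = idx; }
  int    Pdg()  const { return fPdg[fPidIndex]; }
  double Prob() const { return fProb[fPidIndex]; }
private:
  std::vector<double> fProb;          // descending probabilities
  std::vector<int>    fPdg;           // corresponding PDG codes
  std::size_t         fPidIndex = 0;  // default: the most probable hypothesis
};
```

Keeping the most probable hypothesis first makes the common case (taking the best PID) an index-0 lookup.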
% -----------------------------------------------------------------------------

\subsection{Pair}

The pair object points to two particles and implements
a set of methods for the calculation of pair properties.
It buffers calculated values and intermediate
results for performance reasons. This solution applies to
quantities whose computation is time consuming and
also to quantities with a high reuse probability. A
Boolean flag is used to mark the variables already calculated.
To ensure that this mechanism works properly,
the pair always uses its own methods internally,
instead of accessing its variables directly.
The pair object has a pointer to another pair with the
particles swapped. The existence of this feature is connected with
the implementation of the mixing algorithm in the correlation
analysis package: if particle A is combined with B,
the pair with the swapped particles is not mixed.
In non-identical-particle analysis the order of the particles is important, and
a pair cut may reject a pair while the reversed one would be
accepted. Hence, in the analysis the swapped pair is also tried
if the regular one is rejected. In this way the buffering feature is
automatically used for the swapped pair as well.
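The buffering mechanism can be illustrated with a minimal sketch (hypothetical names; the real pair class caches many more quantities than in this example):

```cpp
#include <cmath>

// Minimal sketch of the lazy-evaluation buffering used by the pair class:
// a Boolean flag marks whether the value has already been calculated.
class Pair {
public:
  Pair(double px1, double py1, double px2, double py2)
    : fPx1(px1), fPy1(py1), fPx2(px2), fPy2(py2) {}

  // Always used internally as well, so the cache stays consistent.
  double DeltaPt() {
    if (!fDeltaPtOk) {                 // compute only once
      fDeltaPt = std::fabs(std::hypot(fPx1, fPy1) - std::hypot(fPx2, fPy2));
      fDeltaPtOk = true;
      ++fComputations;                 // instrumentation for this sketch only
    }
    return fDeltaPt;
  }
  int Computations() const { return fComputations; }
private:
  double fPx1, fPy1, fPx2, fPy2;
  double fDeltaPt = 0.0;
  bool   fDeltaPtOk = false;           // the `already calculated' flag
  int    fComputations = 0;
};
```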
% -----------------------------------------------------------------------------

\subsection{Analysis manager class and base class}

The {\it analysis manager} class (\texttt{AliRunAnalysis}) drives the whole
process. A particular analysis, which must inherit from the
\texttt{AliAnalysis} class, is added to it.
The user triggers the analysis by calling the \texttt{Process} method.
The manager performs a loop over events, which are delivered by
a reader (a derivative of the \texttt{AliReader} class, see
Section~\ref{cap:soft:secReaders}).
This design allows the analyses to be chained in the proper order if one
depends on the results of another.
The user can set an event cut in the manager class.
If an event is not rejected, the \texttt{ProcessEvent}
method is executed for each analysis object.
This method requires two parameters, namely pointers to
a reconstructed and a simulated event.
The events have a parallel structure, i.e.\ the corresponding
reconstructed and simulated particles always have the same index.
This allows for the easy implementation of an analysis where both
are required, e.g.\ when constructing residual distributions.
It is also very important in correlation simulations
that use the weight algorithm~\cite{CH6Ref:Weights}.
By default, the pointer to the simulated event is null,
i.e.\ as it is when processing experimental data.
An event cut and a pair cut can be set in \texttt{AliAnalysis}.
The latter points to two particle cuts, so
an additional particle-cut data member is redundant:
the user can set it in the pair cut.

The \texttt{AliAnalysis} class has a feature that allows one to choose
which data the cuts check:
\begin{itemize}
\item the reconstructed data (default),
\item the simulated data,
\item or both.
\end{itemize}
It has four pointers to methods (data members):
\begin{itemize}
\item \texttt{fkPass1} -- checks a particle; the cut is defined by the
cut on the first particle in the pair-cut data member;
\item \texttt{fkPass2} -- as above, but the cut on the second particle is used;
\item \texttt{fkPass} -- checks a pair;
\item \texttt{fkPassPairProp} -- checks a pair, but only the two-particle
properties are taken into account.
\end{itemize}
Each of them takes two parameters, namely pointers to the
reconstructed and the simulated particles or pairs.
The user switches the behavior with a
method that sets the above pointers to the appropriate methods.
We have decided to implement
this solution because it performs faster than the simpler one using
Boolean flags and `if' statements. These cuts are used mostly inside
multiply nested loops, and even a small performance gain transforms
into a noticeable reduction of the overall computation time.
In the case of the event cut, the simpler solution was applied:
the \texttt{Rejected} method is always used to check events.
A developer of analysis code must always use this method and
the pointers to methods itemized above to benefit from this feature.
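The pointer-to-method technique can be sketched in stand-alone C++ as follows (an illustration of the design choice only; the names and the cut are hypothetical):

```cpp
// Sketch of cut dispatch via pointers to member functions: the check
// performed inside the inner loops is selected once, up front,
// instead of testing Boolean flags on every call.
struct Particle { double pt; };

class Analysis {
public:
  // select which data the cuts check (done once, outside the loops)
  void CheckReconstructed() { fkPass = &Analysis::PassRec; }
  void CheckSimulated()     { fkPass = &Analysis::PassSim; }

  // called from multiply nested loops; no `if' statement here
  bool Pass(const Particle& rec, const Particle& sim) const {
    return (this->*fkPass)(rec, sim);
  }
private:
  bool PassRec(const Particle& rec, const Particle&) const { return rec.pt > fMinPt; }
  bool PassSim(const Particle&, const Particle& sim) const { return sim.pt > fMinPt; }

  using PassFn = bool (Analysis::*)(const Particle&, const Particle&) const;
  PassFn fkPass = &Analysis::PassRec;  // default: check the reconstructed data
  double fMinPt = 0.5;                 // hypothetical cut value
};
```

The branch is paid once, when the pointer is set, rather than on every call inside the loops.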
% -----------------------------------------------------------------------------

\subsection{Readers}
\label{cap:soft:secReaders}

A reader is the object that provides data for an analysis.
\texttt{AliReader} is the base class that defines a pure virtual
interface.

A reader may stream the reconstructed and/or the
simulated data. Each of them is stored in a separate AOD.
If it reads both, the corresponding reconstructed and
simulated particles always have the same index.
The most important methods for the user are the following:
\begin{itemize}
\item \texttt{Next} -- triggers the reading of the next event; it returns
0 in case of success and 1 if no more events are available;
\item \texttt{Rewind} -- rewinds reading to the beginning;
\item \texttt{GetEventRec} and \texttt{GetEventSim} -- return
pointers to the reconstructed and the simulated event, respectively.
\end{itemize}
The base reader class implements the functionality for
particle filtering at the reading level. A user can set any
number of particle cuts in a reader, and a particle is
read if it fulfills the criteria defined by any of them.
In particular, a particle's type is never certain, so the readers
are constructed in such a way that all the PID hypotheses with non-zero
probability are verified.
In principle, a track can be read with more than one mass
hypothesis.
For example, consider a track
which is a pion with 55\% probability and a kaon with 40\% probability,
and a user who wants to read
all the pions and kaons with PID probabilities higher than
50\% and 30\%, respectively. In such a case two particles
with different PIDs are added to the AOD.
However, both particles have the same Unique Identification
number (UID), so it can easily be checked that they are in fact
the same track.
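The example above can be sketched as follows (a stand-alone illustration with hypothetical names; PDG codes 211 and 321 denote the pion and the kaon):

```cpp
#include <cstddef>
#include <vector>

// Sketch of multi-hypothesis reading: one track can enter the AOD as
// several particles with different PIDs but the same UID.
struct Hypothesis  { int pdg; double prob; };
struct AodParticle { int pdg; double prob; std::size_t uid; };

// Each hypothesis with non-zero probability is checked against the
// user-defined threshold for its particle type.
inline std::vector<AodParticle>
ReadTrack(std::size_t uid,
          const std::vector<Hypothesis>& hyps,
          double minPionProb, double minKaonProb) {
  std::vector<AodParticle> out;
  for (const Hypothesis& h : hyps) {
    if (h.prob <= 0.0) continue;  // only non-zero hypotheses are verified
    if (h.pdg == 211 && h.prob > minPionProb) out.push_back({h.pdg, h.prob, uid});
    if (h.pdg == 321 && h.prob > minKaonProb) out.push_back({h.pdg, h.prob, uid});
  }
  return out;                     // same uid => same physical track
}
```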
% Multiple File Sources
\texttt{AliReader} implements a feature that allows one to specify and manipulate
multiple data sources, which are read sequentially.
The user can provide a list of directory names where the data are searched for.
The \texttt{ReadEventsFromTo} method allows one to limit the range of events that
are read
(e.g.\ when only one event of the hundreds stored in an AOD is of interest).
\texttt{AliReader} has a switch that enables event buffering,
so that an event is not deleted and can be accessed quickly if requested again.

Particles within an event are frequently sorted in some way; for example,
the particle-trajectory reconstruction provides tracks sorted according
to their transverse momentum. This leads to asymmetric
distributions where they are not expected. The user can request the
reader to randomize the particle order with the \texttt{SetBlend} method.
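The randomization itself amounts to a standard in-place shuffle, sketched here in stand-alone C++ (an illustration only; \texttt{Blend} is a hypothetical stand-in for the reader's internal method):

```cpp
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

// Sketch of order randomization (`blending'): a Fisher-Yates shuffle
// removes any sorting introduced by the reconstruction while keeping
// the particle content of the event unchanged.
inline void Blend(std::vector<int>& particles, unsigned seed = 12345) {
  std::mt19937 gen(seed);
  for (std::size_t i = particles.size(); i > 1; --i) {
    std::uniform_int_distribution<std::size_t> d(0, i - 1);
    std::swap(particles[i - 1], particles[d(gen)]);  // pick a random earlier slot
  }
}
```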
The AOD objects can be written to disk with \texttt{AliReaderAOD},
using the static method \texttt{WriteAOD}. As the first
parameter the user must pass a pointer to another reader that
provides the AOD objects. Typically it is an \texttt{AliReaderESD},
but it can also be another reader, e.g.\ another \texttt{AliReaderAOD}
(to filter out the desired particles from already existing AODs).
Inside the file, the AODs are stored in a \texttt{TTree}.
Since the AOD stores particles in a clones array, and many particle
formats are allowed, reading and writing are not straightforward.
The user must specify the particle format to be stored on disk,
because in the general case the input reader can stream AODs with
inconsistent particle formats. Hence, a careful check must be done,
because storing an object of a type different from the one specified in
the tree leads to an inevitable crash. If the input AOD has a particle
type different from the expected one, it is automatically converted.
Hence, this method can also be used for AOD type conversion.
% -----------------------------------------------------------------------------

\subsection{AOD buffer}

Normally the readers do not buffer the events.
Frequently, however, an event needs to be kept for further analysis,
e.g.\ when the uncorrelated combinatorial background is computed.
We have implemented a FIFO (First In, First Out) buffer called
\texttt{AliEventBuffer} that caches a defined number of events.
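Such a FIFO buffer can be sketched as follows (a stand-alone illustration; \texttt{AliEventBuffer} itself differs in names and details):

```cpp
#include <cstddef>
#include <deque>

// Sketch of a FIFO event buffer caching at most fMaxSize events
// (the role played by AliEventBuffer; the names here are illustrative).
template <class Event>
class EventBuffer {
public:
  explicit EventBuffer(std::size_t maxSize) : fMaxSize(maxSize) {}

  void Push(const Event& ev) {
    fEvents.push_back(ev);    // newest event at the back
    if (fEvents.size() > fMaxSize)
      fEvents.pop_front();    // drop the oldest one (First In, First Out)
  }
  std::size_t  Size() const { return fEvents.size(); }
  const Event& Get(std::size_t i) const { return fEvents[i]; }  // 0 = oldest
private:
  std::deque<Event> fEvents;
  std::size_t fMaxSize;
};
```

A bounded buffer of this kind is exactly what the event-mixing background estimation needs: the current event is combined with the few most recent `similar' ones.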
% -----------------------------------------------------------------------------

\subsection{Cuts}

The cuts are designed to guarantee the highest flexibility
and performance. We have implemented the same two-level architecture
for all the cuts (particle, pair, and event).
A cut object defines the ranges of many properties that a particle, a pair, or
an event may possess, and it also defines a method which performs the
necessary check. However, usually a user wants to limit the
ranges of only a few properties. For speed and robustness reasons,
the design presented in Fig.~\ref{cap:soft:partcut} was developed.
The cut object has an array of pointers to
base cuts. The number of entries in the array depends
on the number of properties the user wants to limit.
A base cut implements the check of a single property.
It stores the maximum and minimum values and provides a virtual method \texttt{Rejected}
that performs a range check of the value returned by the pure
virtual method \texttt{GetValue}. Implementing a concrete
base cut is very easy in most cases: it is enough to
implement the \texttt{GetValue} method. The ANALYSIS package
already implements a wide range of base cuts,
and the cut classes have a convenient interface for
setting all of them. For example, it is enough to invoke
the \texttt{SetPtRange(min,max)} method and behind the scenes
a proper base cut is created and configured.
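The two-level design can be sketched as follows. This is a self-contained illustration, not the actual ANALYSIS package code: the class names and the mock \texttt{Particle} struct are our own, but the structure mirrors the text, with a base cut holding a range, a pure virtual \texttt{GetValue}, and a cut object that creates base cuts behind a \texttt{SetPtRange}-style setter.

```cpp
#include <cmath>
#include <memory>
#include <vector>

// Hypothetical mock of the object being checked (the real framework
// uses its own particle classes).
struct Particle { double px, py; };

// Base cut: stores a [min, max] window and checks the value returned
// by the pure virtual GetValue (names modeled on the text, not real API).
class BaseCut {
public:
    BaseCut(double min, double max) : fMin(min), fMax(max) {}
    virtual ~BaseCut() = default;

    // Returns true when the particle fails the range check.
    bool Rejected(const Particle& p) const {
        double v = GetValue(p);
        return v < fMin || v > fMax;
    }

protected:
    virtual double GetValue(const Particle& p) const = 0;

private:
    double fMin, fMax;
};

// A concrete base cut only needs to implement GetValue.
class PtBaseCut : public BaseCut {
public:
    using BaseCut::BaseCut;
protected:
    double GetValue(const Particle& p) const override {
        return std::hypot(p.px, p.py);    // transverse momentum
    }
};

// The cut object holds one base cut per limited property and offers
// convenience setters that create them behind the scenes.
class ParticleCut {
public:
    void SetPtRange(double min, double max) {
        fBaseCuts.push_back(std::make_unique<PtBaseCut>(min, max));
    }
    bool Rejected(const Particle& p) const {
        for (const auto& cut : fBaseCuts)
            if (cut->Rejected(p)) return true;   // any failing check rejects
        return false;
    }
private:
    std::vector<std::unique_ptr<BaseCut>> fBaseCuts;
};
```

Only the properties actually limited by the user cost anything at check time, which is the speed argument made above.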
Base cuts performing a logical operation (and, or) on the results of two
other base cuts are also implemented. This way the user can configure basically any
cut in a macro. Supplementary user-defined base cuts can be added in the user
code. In case the user prefers to implement a complicated cut in a single method (class),
he or she can create a dedicated base cut performing all the operations.
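The logical combination of cuts can be illustrated compactly if a cut is reduced to a predicate on a value; the \texttt{And}/\texttt{Or}/\texttt{Range} helpers below are our own illustrative names, showing how two existing cuts compose into a new one in the spirit of the composite base cuts just described.

```cpp
#include <functional>

// Illustrative sketch of logical base cuts: a cut is reduced here to
// a predicate on a single value; And/Or build a new cut out of two
// existing ones (hypothetical helpers, not the actual framework API).
using Cut = std::function<bool(double)>;   // true = value passes the cut

Cut And(Cut a, Cut b) {
    return [a, b](double v) { return a(v) && b(v); };
}
Cut Or(Cut a, Cut b) {
    return [a, b](double v) { return a(v) || b(v); };
}
Cut Range(double min, double max) {
    return [min, max](double v) { return v >= min && v <= max; };
}
```

For example, `Or(Range(0, 1), Range(5, 6))` accepts values in either window, which is the kind of cut a user could assemble entirely from a macro.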
The pair cut, in addition to an array of pointers to base pair
cuts, has two pointers to particle cuts, one for each particle in
the pair.
\includegraphics[width=0.4\columnwidth, origin=c]{picts/partcuts}
{Cut classes diagram using the example of the particle cut.
\label{cap:soft:partcut}}
\subsection{Other classes}
We have developed a few classes that are used in correlation analyses,
but they can also be useful in others. The first is the TPC cluster map,
which is a bitmap vector describing at which pad-rows a track has a cluster.
It is used by the anti-splitting algorithm in the particle correlation
analysis.
Another example is the \class{AliTrackPoints} class, which stores
track space coordinates at requested distances from the center of
the detector. It is used in the particle correlation analysis
by the anti-merging cut.
The coordinates are calculated assuming the helix shape
of a track. Different options that define the way they are computed
are available.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Data input, output and exchange subsystem of AliRoot}
This section is taken from~\cite{PiotrPhD}.
A few tens of different data types are present within AliRoot because
hits, summable digits, digits and clusters are characteristic for each
sub-detector. Writing all of the event data to a single file was
causing a number of limitations.
Moreover, the reconstruction chain introduces rather complicated dependences
between different components of the framework, which is highly
undesirable from the point of view of software design.
In order to solve both problems, we have designed a set of classes that
manage data manipulation, i.e. storage, retrieval and exchange within
the framework.
It was decided to use the ``white board'' concept: a single
exchange object where all data are stored and publicly accessible.
For that purpose we have employed the \textbf{TFolder} facility of ROOT.
This solution solves the problem of inter-module dependencies.
There are two most frequent use-cases concerning the way a user deals with the data within the framework:
\item data production -- produce -- \textbf{write} -- \textbf{unload} (clean)
\item data processing -- \textbf{load} (retrieve) -- process -- \textbf{unload}
\textbf{Loader}s are utility classes that encapsulate and
automate the tasks written in bold font.
They limit the user's interaction with the I/O routines to the
necessary minimum, providing a friendly and very easy interface
which, for the use-cases considered above, consists of only three methods:
\item \texttt{Load} -- retrieves the requested data to the appropriate place in the
white board (folder)
\item \texttt{Unload} -- cleans the data
\item \texttt{Write} -- writes the data
Such an insulation layer has a number of advantages:
\item it makes data access easier for the user.
\item it avoids code duplication in the framework.
\item it minimizes the risk of bugs resulting from improper I/O management.
The ROOT object-oriented data storage greatly simplifies the user interface;
however, there are a few pitfalls that are frequently unknown to an
inexperienced user.
To make the description clearer, we need to introduce briefly
the basic concepts and the way the AliRoot program operates.
The basic entity is an event, i.e. all the data recorded by the
detector in a certain time interval plus all the information reconstructed
from these data. Ideally the data are produced by a single collision
selected by a trigger for recording. However, it may happen that data
from previous or subsequent events are present, because the bunch
crossing rate is higher than the maximum detector frequency (pile-up),
or simply because more than one collision occurred within one bunch crossing.
Information describing the event and the detector state is also
stored, like the bunch crossing number, magnetic field, configuration, alignment, etc.
In the case of \MC simulated data, information concerning the
generator and the simulation parameters is also kept. Altogether these data
are called the \textbf{header}.
For collisions that produce only a few tracks (the best example
being \pp collisions) it may happen that the total overhead
(the size of the header and of the ROOT structures supporting object-oriented
data storage) is non-negligible in comparison with the data itself.
To avoid such situations, the possibility of storing an arbitrary number
of events together within a \textbf{run} is required. Hence, the common data can be
written only once per run and several events can be written to a single file.
It was decided that the data related to different detectors
and to different processing phases should be stored in different files.
In such a case only the required data need to be downloaded for an analysis.
It also allows one to alter the files easily if required,
for example when a new version of the reconstruction or simulation needs
to be run for a given detector. Hence, only the new files are updated
and all the rest may stay untouched. This is especially important because
it is difficult to erase files in mass storage systems.
This also gives the possibility of an easy comparison of the data produced with
competing algorithms.
All the header data, configuration and management objects
are stored in a separate file, which is usually named galice.root
(for simplicity we will further refer to it as galice).
% -----------------------------------------------------------------------------
\subsection{The ``White Board''}
The folder structure is presented in Fig.~\ref{cap:soft:folderstruct}.
It is subdivided into two parts:
\item \textbf{event data} that have the scope of a single event
\item \textbf{static data} that do not change from event to event,
i.e. geometry and alignment, calibration, etc.
During the startup of AliRoot the skeleton structure of the ALICE white
board is created. The \texttt{AliConfig} class (a singleton) provides all the
functionality that is needed to construct the folder structures.
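The white-board idea can be illustrated with a minimal singleton registry in which producers post data under a path and consumers look it up by that path. This is purely a sketch of the concept, with our own names; the real implementation is built on ROOT's \texttt{TFolder} hierarchy via \texttt{AliConfig}.

```cpp
#include <map>
#include <string>

// Minimal illustration of the white-board concept (hypothetical names;
// AliRoot uses ROOT's TFolder via AliConfig): a singleton registry
// mapping folder-like paths to posted objects.
class WhiteBoard {
public:
    static WhiteBoard& Instance() {        // singleton access point
        static WhiteBoard board;
        return board;
    }
    // A producer publishes an object under a path.
    void Post(const std::string& path, void* obj) { fFolders[path] = obj; }
    // A consumer retrieves it without knowing who produced it.
    void* Get(const std::string& path) const {
        auto it = fFolders.find(path);
        return it == fFolders.end() ? nullptr : it->second;
    }
    // Unloading simply removes the entry.
    void Clean(const std::string& path) { fFolders.erase(path); }
private:
    WhiteBoard() = default;                // construction only via Instance()
    std::map<std::string, void*> fFolders; // path -> posted object
};
```

Because modules exchange data only through the board, a consumer never needs a compile-time dependency on the producer, which is the decoupling argued for above.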
Event data are stored under a single sub-folder (the event folder) named as
specified by the user when opening a session (run). Many sessions can be
open at the same time, provided that each of them has a unique event
folder name, so they can be distinguished by this name.
This functionality is crucial for superimposing events
at the level of summable digits, i.e. the analog detector response without the noise
contribution (event merging). It is also useful when two events,
or the same event simulated or reconstructed with competing algorithms,
need to be compared.
\includegraphics[width=0.8\columnwidth, origin=c]{picts/folderstruct}
{The folder structure. An example event is mounted under the ``Event'' folder.
\label{cap:soft:folderstruct}}
% -----------------------------------------------------------------------------
\subsection{Loaders}
Loaders can be represented as a four-layer, tree-like structure
(see Fig.~\ref{cap:soft:loaderdiagram}). It represents the logical structure of
the detector and the data association.
\includegraphics[width=1.0\columnwidth, origin=c]{picts/loaderdiagram}
{Loaders diagram. Dashed lines separate the layers serviced by the different types of
loaders (from top): AliRunLoader, AliLoader, AliDataLoader, AliBaseLoader.
\label{cap:soft:loaderdiagram}}
\item \texttt{AliBaseLoader} -- A base loader is responsible for posting
(finding in a file and publishing in a folder) and writing
(finding in a folder and putting in a file) a single object.
\texttt{AliBaseLoader} is a pure virtual class because writing and
posting depend on the type of the object. The following concrete classes are currently implemented:
\item \texttt{AliObjectLoader} -- handles \texttt{TObject}s, i.e. basically any object
within ROOT and AliRoot, since an object must inherit from
this class to be posted to the white board
(added to a \texttt{TFolder}).
\item \texttt{AliTreeLoader} -- the base loader for \texttt{TTree}s,
which require special
handling because they must always be properly
associated with a file.
\item \texttt{AliTaskLoader} -- handles \texttt{TTask}s, which need to be posted to the
appropriate parental \texttt{TTask} instead of a \texttt{TFolder}.
\texttt{AliBaseLoader} stores the name of the object it manages in
its base class \class{TNamed} in order to be able
to find it in a file or folder. The user normally does not need to use
these classes directly; they are rather utility classes employed by
\texttt{AliDataLoader}.
\item \texttt{AliDataLoader} -- It manages a single data type, for example the digits of
a detector or the kinematics tree.
Since a few objects are normally associated with a given
data type (the data itself, the quality assurance (QA) data,
the task that produces the data, the QA task, etc.)
\texttt{AliDataLoader} has an array of \texttt{AliBaseLoader}s,
so that each of them is responsible for one object.
Hence, \texttt{AliDataLoader} can be configured individually to
meet the specific requirements of a certain data type.
A single file contains the data corresponding to a single processing
phase and solely to one detector.
By default the file is named according to the scheme
{\it Detector Name + Data Name + .root}, but this can be
changed at run time if needed, so the data can be stored in or retrieved
from an alternative source. When needed,
the user can limit the number of events stored in a single file.
If the maximum number is exceeded, the file is closed
and a new one is opened, with a consecutive number added
to its name before the {\it .root} suffix. Of course,
during the reading process, files are also automatically
interchanged behind the scenes; this is invisible to the user.
The \texttt{AliDataLoader} class performs all the tasks related
to file management, e.g. opening, closing,
management of ROOT directories, etc.
Hence, for each data type the average file size can be
tuned. This is important because it is undesirable to store small
files on mass storage systems while, on the other hand, all file
systems have a maximum allowed file size.
\item \texttt{AliLoader} -- It manages all the data associated with a
single detector (hits, digits, summable digits, reconstructed points, etc.).
It has an array of \texttt{AliDataLoader}s, and each of them manages
a single data type.
The \texttt{AliLoader} object is created by the class representing
a detector (inheriting from \texttt{AliDetector}).
Its functionality can be extended and customized to the needs of a
particular detector by creating a specialized class that derives
from \texttt{AliLoader}, as was done, for instance, for the ITS or PHOS.
The default configuration can be
easily modified either in \texttt{AliDetector::MakeLoader}
or by overriding the method \texttt{AliLoader::InitDefaults}.
\item \texttt{AliRunLoader} -- It is the main handle for data access and manipulation in
AliRoot. There is only one such object in each run.
It is always named {\it RunLoader} and stored
in the top (ROOT) directory of a galice file.
It keeps an array of \texttt{AliLoader}s, one for each detector.
It also manages the event data that are not associated with any detector,
i.e. the Kinematics and the Header, and it utilizes \texttt{AliDataLoader}s
for this purpose.
The user opens a session using the static method \texttt{AliRunLoader::Open}.
This method has three parameters: the file name, the event folder name and the mode.
The mode can be ``new'', in which case a file and a run loader are created from scratch.
Otherwise, a file is opened and a run loader is searched for in it.
If successful, the event folder with the provided name is created
(if it does not exist yet) and the structure
presented in Fig.~\ref{cap:soft:folderstruct} is created within the folder.
The run loader is put in the event folder, so the user can always find it there
and use it for data management.
\texttt{AliRunLoader} provides a simple method, \texttt{GetEvent(n)},
to loop over the events within a run. Calling it causes all
currently loaded data to be cleaned and the data for
the newly requested event to be posted automatically.
In order to facilitate the way the user interacts with the loaders,
\texttt{AliRunLoader} provides a wide set of shortcut methods.
For example, if digits are required to be loaded, the user can call
\texttt{AliRunLoader::LoadDigits("ITS TPC")} instead of finding the appropriate
\texttt{AliDataLoader}s responsible for the ITS and TPC digits
and then requesting each of them to load the data.
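Such a shortcut essentially splits the whitespace-separated detector list and forwards the request to the corresponding per-detector loader. The sketch below uses our own mock names (it is not the actual \texttt{AliRunLoader} code, which delegates to its array of \texttt{AliLoader} objects), but it shows the delegation pattern.

```cpp
#include <set>
#include <sstream>
#include <string>

// Sketch of a LoadDigits-style shortcut (hypothetical names): the
// detector list string is tokenized and a load request is recorded
// for each named detector.
class MockRunLoader {
public:
    void LoadDigits(const std::string& detectors) {
        std::istringstream in(detectors);
        std::string name;
        while (in >> name)
            fLoaded.insert(name);      // stands in for delegating to the
                                       // detector's data loader
    }
    bool DigitsLoaded(const std::string& det) const {
        return fLoaded.count(det) != 0;
    }
private:
    std::set<std::string> fLoaded;     // detectors whose digits are loaded
};
```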
\section{Calibration and alignment}
\subsection{Calibration framework}
The calibration framework is based on the following principles:
\item the calibration and alignment database contains ROOT \texttt{TObject}s stored
in ROOT files;
\item calibration and alignment objects are RUN DEPENDENT objects;
\item the database is READ-ONLY (automatic versioning of the stored
objects);
\item three different data store structures are (currently) available:
\item a GRID folder containing ROOT files, each one containing one
single ROOT object. The ROOT files are created inside a directory tree
defined by the object's name and run validity range;
\item a LOCAL folder containing ROOT files, each one containing one
single ROOT object, with a structure similar to the Grid one;