\documentclass[12pt,a4paper,twoside]{article}

% ---------------------------------------------------------------
% define new commands/symbols
% ---------------------------------------------------------------

% General stuff
\newcommand {\pT} {\mbox{$p_{\rm t}$}}
\newcommand {\grid} {Grid\@\xspace}
\newcommand {\MC} {Monte~Carlo\@\xspace}
\newcommand {\alien} {AliEn\@\xspace}
\newcommand {\pp} {\mbox{p--p}\@\xspace}
\newcommand {\pA} {\mbox{p--A}\@\xspace}
\newcommand {\PbPb} {\mbox{Pb--Pb}\@\xspace}
\newcommand {\aliroot} {AliRoot\@\xspace}
\newcommand {\ROOT} {ROOT\@\xspace}
\newcommand {\OO} {Object-Oriented\@\xspace}
\newcommand{\Jpsi} {\mbox{J\kern-0.05em /\kern-0.05em$\psi$}\xspace}
\newcommand{\psip} {\mbox{$\psi^\prime$}\xspace}
\newcommand{\Ups} {\mbox{$\Upsilon$}\xspace}
\newcommand{\Upsp} {\mbox{$\Upsilon^\prime$}\xspace}
\newcommand{\Upspp} {\mbox{$\Upsilon^{\prime\prime}$}\xspace}
\newcommand{\qqbar} {\mbox{$q\bar{q}$}\xspace}
\newcommand {\grad} {\mbox{$^{\circ}$}}
\newcommand {\rap} {\mbox{$\left | y \right | $}}
\newcommand {\mass} {\mbox{\rm GeV$\kern-0.15em /\kern-0.12em c^2$}}
\newcommand {\tev} {\mbox{${\rm TeV}$}}
\newcommand {\gev} {\mbox{${\rm GeV}$}}
\newcommand {\mev} {\mbox{${\rm MeV}$}}
\newcommand {\kev} {\mbox{${\rm keV}$}}
\newcommand {\mom} {\mbox{\rm GeV$\kern-0.15em /\kern-0.12em c$}}
\newcommand {\mum} {\mbox{$\mu {\rm m}$}}
\newcommand {\gmom} {\mbox{\rm GeV$\kern-0.15em /\kern-0.12em c$}}
\newcommand {\mmass} {\mbox{\rm MeV$\kern-0.15em /\kern-0.12em c^2$}}
\newcommand {\mmom} {\mbox{\rm MeV$\kern-0.15em /\kern-0.12em c$}}
\newcommand {\nb} {\mbox{\rm nb}}
\newcommand {\musec} {\mbox{$\mu {\rm s}$}}
\newcommand {\cmq} {\mbox{${\rm cm}^{2}$}}
\newcommand {\cm} {\mbox{${\rm cm}$}}
\newcommand {\mm} {\mbox{${\rm mm}$}}
\newcommand {\dens} {\mbox{${\rm g}\,{\rm cm}^{-3}$}}
\lstset{ % general command to set parameter(s)
% basicstyle=\small,            % print whole listing small
  basicstyle=\ttfamily,         % print whole listing in monospace
  keywordstyle=\bfseries,       % bold keywords
  identifierstyle=,             % identifiers in default style
  commentstyle=\itshape,        % comments in italic
  stringstyle=\ttfamily,        % typewriter type for strings
  showstringspaces=false,       % no special string spaces
  columns=fullflexible,         % flexible columns
  xleftmargin=2em,              % extra margin, left
  xrightmargin=2em,             % extra margin, right
  numbers=left,                 % line numbers on the left
  numberfirstline=true,         % first line numbered
  firstnumber=1,                % always start at 1
  stepnumber=5,                 % number every fifth line
  numberstyle=\footnotesize\itshape, % style of line numbers
  frame=lines}                  % lines above and below listings
% ---------------------------------------------------------
% - End of Definitions
% ---------------------------------------------------------
\title{AliRoot Primer}
\author{Editor P.Hristov}
\date{Version v4-05-06 \\
% -----------------------------------------------------------------------------

\subsection{About this primer}
The aim of this primer is to give some basic information about the
ALICE offline framework (AliRoot) from the user's perspective. We
explain in detail the installation procedure, and give examples of
some typical use cases: detector description, event generation,
particle transport, generation of ``summable digits'', event merging,
reconstruction, particle identification, and generation of event
summary data. The primer also includes some examples of analysis, and
a short description of the existing analysis classes in AliRoot. An
updated version of the document can be downloaded from

For the reader interested in the AliRoot architecture and in the
performance studies done so far, a good starting point is Chapter 4 of
the ALICE Physics Performance Report\cite{PPR}. Another important
document is the ALICE Computing Technical Design Report\cite{CompTDR}.
Some information contained there has been included in the present
document, but most of the details have been omitted.
AliRoot uses the ROOT\cite{ROOT} system as a foundation on which the
framework for simulation, reconstruction and analysis is built. The
transport of the particles through the detector is carried out by the
Geant3\cite{Geant3} or FLUKA\cite{FLUKA} packages. Support for the
Geant4\cite{Geant4} transport package is coming soon.

Except for large existing libraries, such as Pythia6\cite{MC:PYTH} and
HIJING\cite{MC:HIJING}, and some remaining legacy code, this framework
is based on the Object-Oriented programming paradigm, and it is
written in C++.
The following packages are needed to install a fully operational
software distribution:
\begin{itemize}
\item ROOT, available from \url{}
or from the ROOT CVS repository
\item AliRoot from the ALICE offline CVS repository
\item transport packages:
  \begin{itemize}
  \item GEANT~3, available from the ROOT CVS repository
  \item the FLUKA library, which can be obtained after registration
    from \url{}
  \item the GEANT~4 distribution from \url{}.
  \end{itemize}
\end{itemize}

Access to the Grid resources and data is provided by the
AliEn\cite{AliEn} system.

The installation details are explained in Section~\ref{Installation}.
\subsection{AliRoot framework}\label{AliRootFramework}
In HEP, a framework is a set of software tools that enables data
processing. For example, the old CERN Program Library was a toolkit
for building frameworks. PAW was the first example of integration of
tools into a coherent ensemble specifically dedicated to data
analysis. The role of the framework is shown in Fig.~\ref{MC:Parab}.

\begin{figure}[ht]
  \centering
  \includegraphics[width=10cm]{picts/Parab}
  \caption{Data processing framework.} \label{MC:Parab}
\end{figure}
The primary interactions are simulated via event generators, and the
resulting kinematic tree is then used in the transport package. An
event generator produces a set of ``particles'' with their momenta.
This set of particles, together with their production history (in the
form of mother--daughter relationships and production vertices), forms
the kinematic tree. More details can be found in the ROOT
documentation of the class \class{TParticle}. The transport package
transports the particles through the set of detectors and produces
\textbf{hits}, which in ALICE terminology means energy depositions at
given points. The hits also contain information (``track labels'')
about the particles that have generated them. In the case of the
calorimeters (PHOS and EMCAL) the hit is the energy deposition in the
whole active volume of a detecting element. In some detectors the
energy of the hit is used only for comparison with a given threshold,
for example in TOF and the ITS pixel layers.
At the next step the detector response is taken into account, and the
hits are transformed into \textbf{digits}. As explained above, the
hits are closely related to the tracks which generated them. The
transition from hits/tracks to digits/detectors is marked in the
picture as ``disintegrated response'': the tracks are
``disintegrated'' and only the labels carry the \MC information.
There are two types of digits: \textbf{summable digits}, where low
thresholds are used and the result is additive, and \textbf{digits},
where the real thresholds are used and the result is similar to what
one would get in real data taking. In some sense the \textbf{summable
digits} are precursors of the \textbf{digits}. The noise simulation is
activated when \textbf{digits} are produced. There are two differences
between the \textbf{digits} and the \textbf{raw} data format produced
by the detector: firstly, the information about the \MC particle
generating the digit is kept as a data member of the class
\class{AliDigit}, and secondly, the raw data are stored in binary
format as ``payload'' in a ROOT structure, while the digits are stored
in ROOT classes. Two conversion chains are provided in AliRoot:
\textbf{hits} $\to$ \textbf{summable digits} $\to$ \textbf{digits},
and \textbf{hits} $\to$ \textbf{digits}. The summable digits are used
for the so-called ``event merging'', where a signal event is embedded
in a signal-free underlying event. This technique is widely used in
heavy-ion physics and makes it possible to reuse the underlying events
with a substantial saving of computing resources. Optionally it is
possible to perform the conversion \textbf{digits} $\to$ \textbf{raw
data}, which is used to estimate the expected data size, to evaluate
the high-level trigger algorithms, and to carry out the so-called
computing data challenges. The reconstruction and the HLT algorithms
can work with either \textbf{digits} or \textbf{raw data}. There is
also the possibility to convert the \textbf{raw data} between the
following formats: the format coming from the front-end electronics
(FEE) through the detector data link (DDL), the format used in the
data acquisition system (DAQ), and the ``rootified'' format. More
details are given in Section~\ref{Simulation}.
After the creation of digits, the reconstruction and analysis chain
can be activated to evaluate the software and the detector
performance, and to study some particular signatures. The
reconstruction takes as input digits or raw data, real or simulated.
Users can intervene in the cycle provided by the framework to replace
any part of it with their own code or to implement their own analysis
of the data. I/O and user interfaces are part of the framework, as are
data visualization and analysis tools and all procedures that are
considered of general enough interest to be introduced into the
framework. The scope of the framework evolves with time as the needs
and understanding of the physics community evolve.

The basic principles that have guided the design of the AliRoot
framework are re-usability and modularity. There are almost as many
definitions of these concepts as there are programmers. However, for
our purpose, we adopt an operative heuristic definition that expresses
our objective to minimize the amount of unused or rewritten code and
maximize the participation of the physicists in the development of the
framework.
\textbf{Modularity} allows replacement of parts of our system with
minimal or no impact on the rest. Not every part of our system is
expected to be replaced. Therefore we aim at modularity targeted to
those elements that we expect to change. For example, we require the
ability to change the event generator or the transport \MC without
affecting the user code. There are elements that we do not plan to
interchange, but rather to evolve in collaboration with their authors,
such as the ROOT I/O subsystem or the ROOT User Interface (UI), and
therefore no effort is made to make our framework modular with respect
to these. Whenever an element has to be modular in the sense above, we
define an abstract interface to it. The code of the different
detectors is kept independent so that the detector groups can work
concurrently on the system while minimizing interference. We
understand and accept the risk that at some point the need may arise
to make modular a component that was not designed to be. For these
cases, we have elaborated a development strategy that can handle
design changes in production code.
\textbf{Re-usability} is the protection of the investment made by the
programming physicists of ALICE. The code embodies a large amount of
scientific knowledge and experience and is thus a precious resource.
We preserve this investment by designing a modular system in the sense
above and by making sure that we maintain the maximum amount of
backward compatibility while evolving our system. This naturally
generates requirements on the underlying framework, prompting
developments such as the introduction of automatic schema evolution in
ROOT.
The \textbf{support} of the AliRoot framework is a collaborative
effort within the ALICE experiment. Questions, suggestions, topics for
discussion and messages are exchanged on the mailing list
\url{}. Bug reports and tasks are submitted on the
Savannah page \url{}.
\section{Installation and development tools}\label{Installation}

% -----------------------------------------------------------------------------

\subsection{Platforms and compilers}
The main development and production platform is Linux on Intel 32-bit
processors. The official Linux\cite{Linux} distribution at CERN is
Scientific Linux SLC\cite{SLC}. The code also works on
RedHat\cite{RedHat} versions 7.3, 8.0 and 9.0, Fedora
Core\cite{Fedora} 1--5, and on many other Linux distributions. The
main compiler on Linux is gcc\cite{gcc}: the recommended versions are
gcc 3.2.3 -- 3.4.6. The older releases (2.91.66, 2.95.2, 2.96) have
problems in the FORTRAN optimization, which has to be switched off for
all the FORTRAN packages. AliRoot can be used with gcc 4.0.X, where
the FORTRAN compiler g77 is replaced by g95. The latest release series
of gcc (4.1) works with gfortran as well. As an option you can use the
Intel icc\cite{icc} compiler, which is supported as well. You can
download it from \url{} and use it free of charge for non-commercial
projects. Intel also provides free of charge the VTune\cite{VTune}
profiling tool, which is one of the best available so far.
AliRoot is supported on Intel 64-bit processors
(Itanium\cite{Itanium}) running Linux. Both the gcc and Intel icc
compilers can be used.

On 64-bit AMD\cite{AMD} processors, such as the Opteron, AliRoot runs
successfully with the gcc compiler.
The software is also regularly compiled and run on other Unix
platforms. On Sun (SunOS 5.8) we recommend the CC compiler Sun
WorkShop 6 update 1 C++ 5.2. The WorkShop integrates convenient
debugging and profiling facilities which are very useful for code
development.

On the Compaq alpha server (Digital Unix V4.0) the default compiler is
cxx (Compaq C++ V6.2-024 for Digital UNIX V4.0F). Alpha also provides
its own profiling tool, pixie, which works well with shared libraries.
AliRoot also works on alpha servers running Linux, where the compiler
is gcc.

Recently AliRoot was ported to MacOS (Darwin). This OS is very
sensitive to circular dependencies in the shared libraries, which
makes it very useful as a test platform.
% -----------------------------------------------------------------------------

\subsection{Essential CVS information}
CVS\cite{CVS} stands for Concurrent Versions System. It allows a group
of people to work simultaneously on groups of files (for instance
program sources). It also records the history of files, which allows
backtracking and file versioning. The official CVS Web page is
\url{}. CVS has a host of features, among which the most important
are:
\begin{itemize}
\item CVS facilitates parallel and concurrent code development;
\item it provides easy support and simple access;
\item it makes it possible to establish group permissions (for
  example, only detector experts and CVS administrators can commit
  code to a given detector module).
\end{itemize}
CVS has a rich set of commands; the most important ones are described
below. There exist several tools for visualization, logging and
control which work with CVS. More information is available in the CVS
documentation and manual\cite{CVSManual}.
Usually the development process with CVS has the following features:
\begin{itemize}
\item all developers work on their \underline{own} copy of the project
  (in one of their directories);
\item they often have to \underline{synchronize} with a global
  repository, both to update with modifications from other people and
  to commit their own changes.
\end{itemize}
Below we give an example of a typical CVS session:
\begin{lstlisting}[language=sh]
 # Login to the repository. The password is stored in ~/.cvspass
 # If no cvs logout is done, the password remains there and
 # one can access the repository without a new login
 % cvs -d login
 (Logging in to
 CVS password:
 xxxxxxxx

 # Check out a local version of the TPC module
 % cvs -d checkout TPC
 cvs server: Updating TPC
 U TPC/.rootrc
 U TPC/AliTPC.cxx
 U TPC/AliTPC.h
 ...

 # edit file AliTPC.h
 # compile and test modifications

 # Commit your changes to the repository with an appropriate comment
 % cvs commit -m "add include file xxx.h" AliTPC.h
 Checking in AliTPC.h;
 /soft/cvsroot/AliRoot/TPC/AliTPC.h,v  <--  AliTPC.h
 new revision: 1.9; previous revision: 1.8
 done
\end{lstlisting}
Instead of specifying the repository and user name with the -d option,
one can export the environment variable CVSROOT, for example

\begin{lstlisting}[language=sh]
 % export
\end{lstlisting}

Once the local version has been checked out, CVSROOT is not needed
anymore inside the directory tree. The name of the actual repository
can be found in the CVS/Root file. It can be redefined again using
the -d option.
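To see which repository a checked-out tree actually points to, one can read the CVS/Root file directly. A minimal sketch, assuming a standard CVS checkout layout (the TPC module name is just an example, and \texttt{show\_cvs\_root} is a hypothetical helper, not a CVS command):

```shell
# Print the repository recorded in a checked-out CVS module.
# "TPC" below is an example; pass the directory of your own checkout.
show_cvs_root() {
  if [ -f "$1/CVS/Root" ]; then
    cat "$1/CVS/Root"
  else
    echo "not a CVS checkout: $1"
  fi
}

show_cvs_root TPC
```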
In case somebody else has committed changes to the AliTPC.h file, the
developer has to update the local version, merging his own changes
with the remote ones, before committing:
\begin{lstlisting}[language=sh]
 % cvs commit -m "add include file xxx.h" AliTPC.h
 cvs server: Up-to-date check failed for `AliTPC.h'
 cvs [server aborted]: correct above errors first!

 % cvs update
 cvs server: Updating .
 RCS file: /soft/cvsroot/AliRoot/TPC/AliTPC.h,v
 retrieving revision 1.9
 retrieving revision 1.10
 Merging differences between 1.9 and 1.10 into AliTPC.h
 M AliTPC.h

 # edit, compile and test modifications

 % cvs commit -m "add include file xxx.h" AliTPC.h
 Checking in AliTPC.h;
 /soft/cvsroot/AliRoot/TPC/AliTPC.h,v  <--  AliTPC.h
 new revision: 1.11; previous revision: 1.10
 done
\end{lstlisting}
\textbf{Important note:} CVS performs a purely mechanical merge, and
it is the developer's responsibility to verify the result of this
operation. This is especially true in case of conflicts, when the CVS
tool is not able to merge the local and remote modifications
consistently.
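One simple safeguard is to search a merged file for leftover conflict markers before committing. A sketch (the file name is illustrative, and \texttt{find\_conflicts} is a hypothetical helper, not a CVS command):

```shell
# List lines containing diff3-style conflict markers left by a CVS merge.
# Succeeds whether or not any markers are found.
find_conflicts() {
  [ -f "$1" ] || { echo "no such file: $1"; return 0; }
  grep -n -e '^<<<<<<<' -e '^=======' -e '^>>>>>>>' "$1" || true
}

find_conflicts AliTPC.h
```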
\subsection{Main CVS commands}

In the following examples we suppose that the CVSROOT environment
variable is set, as shown above. In case a local version has already
been checked out, the CVS repository is determined automatically
inside the directory tree.
\begin{itemize}
\item\textbf{login} stores the password in .cvspass. It is enough to
  login once to the repository.

\item\textbf{checkout} retrieves the source files of AliRoot version
  v4-04-Rev-08
  \begin{lstlisting}[language=sh]
 % cvs co -r v4-04-Rev-08 AliRoot
  \end{lstlisting}

\item\textbf{update} retrieves modifications from the repository and
  merges them with the local ones. The -q option reduces the verbose
  output, and -z9 sets the compression level during the data
  transfer. The option -A removes all the ``sticky'' tags, -d creates
  any directories that are missing from the local distribution, and
  -P prunes directories that have become empty. In this way the local
  distribution will be updated to the latest code from the main
  development branch.
  \begin{lstlisting}[language=sh]
 % cvs -qz9 update -AdP STEER
  \end{lstlisting}

\item\textbf{diff} shows the differences between the local and
  repository versions of the whole module STEER
  \begin{lstlisting}[language=sh]
 % cvs -qz9 diff STEER
  \end{lstlisting}

\item\textbf{add} adds files or directories to the repository. The
  actual transfer is done when the commit command is invoked.
  \begin{lstlisting}[language=sh]
 % cvs -qz9 add AliTPCseed.*
  \end{lstlisting}

\item\textbf{remove} removes old files or directories from the
  repository. The -f option forces the removal of the local files. In
  the example below the whole module CASTOR will be scheduled for
  removal.
  \begin{lstlisting}[language=sh]
 % cvs remove -f CASTOR
  \end{lstlisting}

\item\textbf{commit} checks the local modifications in to the
  repository and increments the versions of the files. In the example
  below all the changes made in the different files of the module
  STEER will be committed to the repository. The -m option is
  followed by the log message. In case you don't provide it you will
  be prompted by an editor window. No commit is possible without a
  log message explaining what was done.
  \begin{lstlisting}[language=sh]
 % cvs -qz9 commit -m "Coding convention" STEER
  \end{lstlisting}

\item\textbf{tag} creates new tags and/or branches (with the -b option).
  \begin{lstlisting}[language=sh]
 % cvs tag -b v4-05-Release .
  \end{lstlisting}

\item\textbf{status} returns the actual status of a file: revision,
  sticky tag, dates, options, and local modifications.
  \begin{lstlisting}[language=sh]
 % cvs status Makefile
  \end{lstlisting}

\item\textbf{logout} removes the password stored in
  \$HOME/.cvspass. It is not really necessary unless the user wants
  to remove the password from that account.
\end{itemize}
% -----------------------------------------------------------------------------

\subsection{Environment variables}
Before the installation of AliRoot the user has to set some
environment variables. In the following examples the user is working
on Linux and the default shell is bash. It is enough to add to the
.bash\_profile file a few lines as shown below:

\begin{lstlisting}[language=sh]
 # ROOT
 export ROOTSYS=/home/mydir/root
 export PATH=$PATH\:$ROOTSYS/bin

 # AliRoot
 export ALICE=/home/mydir/alice
 export ALICE_ROOT=$ALICE/AliRoot
 export ALICE_TARGET=`root-config --arch`
 export PATH=$PATH\:$ALICE_ROOT/bin/tgt_${ALICE_TARGET}

 # Geant3
 export PLATFORM=`root-config --arch` # Optional, defined otherwise in Geant3 Makefile
 export

 # FLUKA
 export FLUPRO=$ALICE/fluka # $FLUPRO is used in TFluka
 export PATH=$PATH\:$FLUPRO/flutil

 # Geant4: see the details later
\end{lstlisting}

where ``/home/mydir'' has to be replaced with the actual directory
path. The meaning of the environment variables is the following:
\texttt{ROOTSYS} -- the place where the ROOT package is located;

\texttt{ALICE} -- top directory for all the software packages used in
ALICE;

\texttt{ALICE\_ROOT} -- the place where the AliRoot package is
located, usually a subdirectory of ALICE;

\texttt{ALICE\_TARGET} -- specific platform name. Up to release
v4-01-Release this variable was set to the result of the ``uname''
command. Starting from AliRoot v4-02-05 the ROOT naming scheme was
adopted, and the user has to use the ``root-config --arch'' command.

\texttt{PLATFORM} -- the same as ALICE\_TARGET, but for the GEANT~3
package. Until GEANT~3 v1-0 the user had to use `uname` to specify the
platform. From version v1-0 on, the ROOT platform is used instead
(``root-config --arch''). This environment variable is set by default
in the Geant3 Makefile.
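Before building, it can be useful to verify that the variables above are actually set in the current shell. A minimal sketch (\texttt{check\_env} is a hypothetical helper; the values come from whatever you put in .bash\_profile):

```shell
# Report which of the environment variables needed for an AliRoot
# build are set in the current shell.
check_env() {
  for v in ROOTSYS ALICE ALICE_ROOT ALICE_TARGET; do
    eval "val=\${$v:-}"
    if [ -n "$val" ]; then
      echo "$v=$val"
    else
      echo "$v is NOT set"
    fi
  done
}

check_env
```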
% -----------------------------------------------------------------------------

\subsection{Software packages}
The installation of AliEn is the first one to be done if you plan to
access the Grid or need Grid-enabled ROOT. You can download the AliEn
installer and use it in the following way:
\begin{lstlisting}[language=sh, title={AliEn installation}]
 % wget
 % chmod +x alien-installer
 % ./alien-installer
\end{lstlisting}
The alien-installer runs a dialog which prompts for the default
selection and options. The default installation location for AliEn is
/opt/alien, and the typical packages one has to install are ``client''
and ``gshell''.
All ALICE offline software is based on ROOT\cite{ROOT}. The ROOT
framework offers a number of important elements which are exploited in
AliRoot:
\begin{itemize}
\item a complete data analysis framework including all the PAW
  features;
\item an advanced Graphical User Interface (GUI) toolkit;
\item a large set of utility functions, including several commonly
  used mathematical functions, random number generators,
  multi-parametric fit and minimization procedures;
\item a complete set of object containers;
\item integrated I/O with class schema evolution;
\item C++ as a scripting language;
\item documentation tools.
\end{itemize}
There is a nice ROOT user's guide which incorporates important and
detailed information. For those who are not familiar with ROOT a good
starting point is the ROOT Web page at \url{}. Here the experienced
users may easily find the latest version of the class descriptions
and search for useful information.
The recommended way to install ROOT is from the CVS sources, as shown
below:
\begin{itemize}
\item Login to the ROOT CVS repository if you haven't done it yet.
  \begin{lstlisting}[language=sh]
 % cvs -d login
 % CVS password: cvs
  \end{lstlisting}

\item Download (check out) the needed ROOT version (v5-13-04 in the
  example)
  \begin{lstlisting}[language=sh]
 % cvs -d co -r v5-13-04 root
  \end{lstlisting}
  The appropriate combinations of ROOT, GEANT~3 and AliRoot versions
  can be found at
  \url{}

\item The code is stored in the directory ``root''. You have to go
  there, set the ROOTSYS environment variable (if this has not been
  done in advance), and configure ROOT. ROOTSYS contains the full
  path to the ROOT directory.

  \lstinputlisting[language=sh, title={Root configuration}]{scripts/confroot}

\item Now you can compile and test ROOT
  \lstinputlisting[language=sh,title={Compiling and testing ROOT}]{scripts/makeroot}
\end{itemize}

At this point the user should have a working ROOT version on Linux
(32-bit Pentium processor with the gcc compiler). The list of
supported platforms can be obtained with the ``./configure --help''
command.
The installation of GEANT~3 is needed since, for the moment, this is
the default particle transport package. A GEANT~3 description is
available at

You can download the GEANT~3 distribution from the ROOT CVS repository
and compile it in the following way:

\lstinputlisting[language=sh,title={Make GEANT3}]{scripts/makeg3}

Please note that GEANT~3 is downloaded into the \$ALICE directory.
Another important feature is the PLATFORM environment variable. If it
is not set, the Geant3 Makefile sets it to the result of `root-config
--arch`.
To use GEANT~4\cite{Geant4}, some additional software has to be
installed. GEANT~4 needs the CLHEP\cite{CLHEP} package; the user can
get the tar file (a ``tarball'') from
Then the installation can be done in the following way:

\lstinputlisting[language=sh, title={Make CLHEP}]{scripts/makeclhep}

Another possibility is to use the CLHEP CVS repository:

\lstinputlisting[language=sh, title={Make CLHEP from CVS}]{scripts/makeclhepcvs}

Now the following lines should be added to the .bash\_profile

The next step is to install GEANT~4. The GEANT~4 distribution is
available from \url{}. Typically the following files will be
downloaded (the current versions may differ from the ones below):
\begin{itemize}
\item geant4.8.1.p02.tar.gz: source tarball
\item G4NDL.3.9.tar.gz: G4NDL version 3.9 neutron data files with
  thermal cross sections
\item G4EMLOW4.0.tar.gz: data files for low-energy electromagnetic
  processes, version 4.0
\item PhotonEvaporation.2.0.tar.gz: data files for photon
  evaporation, version 2.0
\item RadiativeDecay.3.0.tar.gz: data files for radioactive-decay
  hadronic processes, version 3.0
\item G4ELASTIC.1.1.tar.gz: data files for high-energy elastic
  scattering processes, version 1.1
\end{itemize}

Then the following steps have to be executed:

\lstinputlisting[language=sh, title={Make GEANT4}]{scripts/makeg4}

The script can be executed from \texttt{\~{}/.bash\_profile} to have
the GEANT~4 environment variables initialized automatically.
The installation of FLUKA\cite{FLUKA} consists of the following steps:

\begin{itemize}
\item register as a FLUKA user at \url{} if you haven't yet done
  so. You will receive your ``fuid'' number and will set your
  password;

\item download the latest FLUKA version from \url{}. Use your
  ``fuid'' registration and password when prompted. You will obtain a
  tarball containing the FLUKA libraries, for example
  fluka2006.3-linuxAA.tar.gz

\item install the libraries;

  \lstinputlisting[language=sh, title={install FLUKA}]{scripts/makefluka}

\item compile TFluka;

  \begin{lstlisting}[language=sh]
 % cd $ALICE_ROOT
 % make all-TFluka
  \end{lstlisting}

\item run AliRoot using FLUKA;

  \begin{lstlisting}[language=sh]
 % cd $ALICE_ROOT/TFluka/scripts
 % ./
  \end{lstlisting}

  This script creates the directory tmp with all the necessary links
  for data and configuration files inside it, and starts aliroot. For
  the next run it is not necessary to run the script again. The tmp
  directory can be kept or renamed. The user should run aliroot from
  inside this directory.

\item from the AliRoot prompt start the simulation;

  \begin{lstlisting}[language=C++]
 root [0] AliSimulation sim;
 root [1] sim.Run();
  \end{lstlisting}

  You will get the results of the simulation in the tmp directory.

\item reconstruct the simulated event;

  \begin{lstlisting}[language=sh]
 % cd tmp
 % aliroot
  \end{lstlisting}

  and from the AliRoot prompt

  \begin{lstlisting}[language=C++]
 root [0] AliReconstruction rec;
 root [1] rec.Run();
  \end{lstlisting}

\item report any problems you encounter to the offline list \url{}.
\end{itemize}
The AliRoot distribution is taken from the CVS repository and then
built:
\begin{lstlisting}[language=sh]
 % cd $ALICE
 % cvs -qz2 -d co AliRoot
 % cd $ALICE_ROOT
 % make
\end{lstlisting}
The AliRoot code (the above example retrieves the HEAD version from
CVS) is contained in the ALICE\_ROOT directory. ALICE\_TARGET is
defined automatically in the \texttt{.bash\_profile} via the call to
`root-config --arch`.
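Since ALICE\_TARGET is derived from `root-config --arch`, the build cannot start unless ROOT is set up on the PATH. A small pre-build check (a sketch; \texttt{require\_tool} is a hypothetical helper):

```shell
# Check that a required tool is on the PATH before starting the build.
require_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found $1"
  else
    echo "missing $1 -- check your ROOT installation and PATH"
    return 1
  fi
}

require_tool root-config || true
```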
While developing code or running some ALICE program, the user may be
confronted with the following execution errors:
\begin{itemize}
\item floating point exceptions: division by zero, sqrt of a negative
  argument, assignment of NaN, etc.;
\item segmentation violations/faults: attempt to access a memory
  location that it is not allowed to access, or in a way which is not
  allowed;
\item bus error: attempt to access memory that the computer cannot
  address.
\end{itemize}
In this case, the user will have to debug the program to determine the
source of the problem and fix it. There are several debugging
techniques, which are briefly listed below:
\begin{itemize}
\item using \texttt{printf(...)}, \texttt{std::cout},
  \texttt{assert(...)}, and \texttt{AliDebug}.
  \begin{itemize}
  \item often this is the only easy way to find the origin of the
    problem;
  \item \texttt{assert(...)} aborts the program execution if the
    argument is FALSE. It is a macro from \texttt{cassert}, and it
    can be deactivated by compiling with -DNDEBUG.
  \end{itemize}
\item using gdb
  \begin{itemize}
  \item gdb needs compilation with the -g option. Sometimes -O2 -g
    prevents exact tracing, so it is safe to compile with -O0 -g for
    debugging purposes;
  \item one can use it directly (gdb aliroot) or attach it to a
    process (gdb aliroot 12345, where 12345 is the process id).
  \end{itemize}
\end{itemize}
Below we report the main gdb commands and their descriptions:
\begin{itemize}
\item \textbf{run} starts the execution of the program;
\item \textbf{Control-C} stops the execution and switches to the gdb
  shell;
\item \textbf{where <n>} prints the program stack. Sometimes the
  program stack is very long. The user can get the last n frames by
  specifying n as a parameter to where;
\item \textbf{print} prints the value of a variable or expression;
  \begin{lstlisting}[language=sh]
 (gdb) print *this
  \end{lstlisting}
\item \textbf{up} and \textbf{down} are used to navigate in the
  program stack;
\item \textbf{quit} exits the gdb session;
\item \textbf{break} sets a breakpoint;
  \begin{lstlisting}[language=C++]
 (gdb) break AliLoader.cxx:100
 (gdb) break 'AliLoader::AliLoader()'
  \end{lstlisting}
  The automatic completion of the class methods via tab is available
  when an opening quote (`) is put in front of the class name.
\item \textbf{cont} continues the run;
\item \textbf{watch} sets a watchpoint (very slow execution). The
  example below shows how to check each change of fData;
  \begin{lstlisting}[language=C++]
 (gdb) watch *fData
  \end{lstlisting}
\item \textbf{list} shows the source code;
\item \textbf{help} shows the description of commands.
\end{itemize}
Profiling is used to discover where the program spends most of the
time, and to optimize the algorithms. There are several profiling
tools available on different platforms:
\item Linux tools:\\
  gprof: requires compilation with the -pg option, static libraries;\\
  oprofile: uses a kernel module;\\
  VTune: instruments shared libraries.
\item Sun: Sun Workshop (Forte agent). It needs compilation with the
  profiling option (-pg);
\item Compaq Alpha: pixie profiler. Instruments shared libraries for profiling.
On Linux AliRoot can be built with static libraries using the special
target ``profile'':
 % make profile
 # change LD_LIBRARY_PATH to replace lib/tgt_linux with lib/tgt_linuxPROF
 # change PATH to replace bin/tgt_linux with bin/tgt_linuxPROF
 % aliroot
 root [0] gAlice->Run()
 root [1] .q
After the end of the aliroot session a file called gmon.out will be created. It
contains the profiling information, which can be investigated using
 % gprof `which aliroot` | tee gprof.txt
 % more gprof.txt
\textbf{VTune profiling tool}

VTune is available from the Intel Web site
\url{}. It is free for
non-commercial use on Linux. It provides call-graph
and sampling profiling. VTune instruments shared libraries, and needs
only the -g option during compilation. Here is an example of
call-graph profiling:
 # Register an activity
 % vtl activity sim -c callgraph -app aliroot,'' -b -q sim.C'' -moi aliroot
 % vtl run sim
 % vtl show
 % vtl view sim::r1 -gui
\subsection{Detection of run time errors}

The Valgrind tool can be used for detection of run time errors on
Linux. It is available from \url{}. Valgrind
is equipped with the following set of tools:
\item memcheck for memory management problems;
\item addrcheck: lightweight memory checker;
\item cachegrind: cache profiler;
\item massif: heap profiler;
\item helgrind: thread debugger;
\item callgrind: extended version of cachegrind.
The most important tool is memcheck. It can detect:
\item use of non-initialized memory;
\item reading/writing memory after it has been freed;
\item reading/writing off the end of malloc'ed blocks;
\item reading/writing inappropriate areas on the stack;
\item memory leaks -- where pointers to malloc'ed blocks are lost forever;
\item mismatched use of malloc/new/new[] vs free/delete/delete[];
\item overlapping source and destination pointers in memcpy() and
  related functions;
\item some misuses of the POSIX pthreads API.
Here is an example of Valgrind usage:
 % valgrind --tool=addrcheck --error-limit=no aliroot -b -q sim.C
%\textbf{ROOT memory checker}
% The ROOT memory checker provides tests of memory leaks and other
% problems related to new/delete. It is fast and easy to use. Here is
% the recipe:
% \begin{itemize}
% \item link aliroot with -lNew. The user has to add `\-\-new' before
% `\-\-glibs' in the ROOTCLIBS variable of the Makefile;
% \item add Root.MemCheck: 1 in .rootrc
% \item run the program: aliroot -b -q sim.C
% \item run memprobe -e aliroot
% \item Inspect the files with .info extension that have been generated.
% \end{itemize}
\subsection{Useful information about LSF and CASTOR}

\textbf{The information in this section is included for completeness: the
  users are strongly advised to rely on the Grid tools for massive
  productions and data access.}

LSF is the batch system at CERN. Every user is allowed to submit jobs
to the different queues. Usually the user has to copy some input files
(macros, data, executables, libraries) from a local computer or from
the mass-storage system to the worker node on lxbatch, then to execute
the program, and to store the results on the local computer or in the
mass-storage system. The methods explained in this section are suitable
if the user doesn't have direct access to a shared directory, for
example on AFS. The main steps and commands are described below.
In order to have access to the local desktop and to be able to use scp
without a password, the user has to create a pair of SSH keys. Currently
lxplus/lxbatch uses RSA1 cryptography. After logging into lxplus the
following has to be done:
 % ssh-keygen -t rsa1
 # Use empty password
 % cp .ssh/ public/authorized_keys
 % ln -s ../public/authorized_keys .ssh/authorized_keys
A list of useful LSF commands is given below:
\item \textbf{bqueues} shows the available queues and their status;
\item \textbf{ bsub -q 8nm} submits the shell script to
  the queue 8nm, where the name of the queue indicates the
  ``normalized CPU time'' (maximal job duration 8 min of normalized CPU time);
\item \textbf{bjobs} lists all unfinished jobs of the user;
\item \textbf{lsrun -m lxbXXXX xterm} returns an xterm running on the
  batch node lxbXXXX. This makes it possible to inspect the job output and to
  debug a batch job.
Each batch job stores its output in the directory LSFJOB\_XXXXXX, where
XXXXXX is the job id. Since the home directory is on AFS, the user has
to redirect the verbose output, otherwise the AFS quota might be
exceeded and the jobs will fail.
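Putting the pieces above together, a minimal sketch of a batch job
script could look as follows (the desktop host name, user name and
target directory are placeholders; the verbose output is redirected to
a log file that is copied back together with the results):

\begin{lstlisting}[language=sh, title={Sketch of a batch job (host, user and paths are placeholders)}]
#!/bin/sh
# Run the simulation, redirecting the verbose output to a log file
# so that the AFS quota is not exceeded
aliroot -b -q sim.C > sim.log 2>&1
# Copy the results and the log back to the desktop via scp
# (works without a password once the SSH keys are in place)
scp AliESDs.root sim.log myuser@mydesktop.cern.ch:results/
\end{lstlisting}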
The CERN mass storage system is CASTOR2~\cite{CASTOR2}. Every user has
his/her own CASTOR2 space, for example /castor/
The commands of CASTOR2 start with the prefix ``ns'' or ``rf''. Here is a
very short list of useful commands:
\item \textbf{nsls /castor/} lists the CASTOR
  space of user phristov;
\item \textbf{rfdir /castor/} the same as
  above, but the output is in long format;
\item \textbf{nsmkdir test} creates a new directory (test) in the
  CASTOR space of the user;
\item \textbf{rfcp /castor/ .}
  copies the file from CASTOR to the local directory. If the file is
  on tape, this will trigger the stage-in procedure, which might take
  some time;
\item \textbf{rfcp AliESDs.root /castor/}
  copies the local file AliESDs.root to CASTOR in the subdirectory
  test and schedules it for migration to tape.
The user also has to be aware that the behavior of CASTOR depends on
the environment variables RFIO\_USE\_CASTOR\_V2(=YES),
STAGE\_HOST(=castoralice) and STAGE\_SVCCLASS(=default). They are set
by default to the values for the group (z2 in case of ALICE).
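As an illustration, a job script could set these variables explicitly
before staging its input (the CASTOR path is a placeholder to be
replaced by the user's own directory):

\begin{lstlisting}[language=sh, title={CASTOR2 environment sketch (path is a placeholder)}]
# Select CASTOR2 and the ALICE stager explicitly
export RFIO_USE_CASTOR_V2=YES
export STAGE_HOST=castoralice
export STAGE_SVCCLASS=default
# Stage an input file from CASTOR to the working directory
rfcp /castor/<user CASTOR directory>/galice.root .
\end{lstlisting}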
Below the user can find an example of a job where the simulation and
reconstruction are run using the corresponding macros sim.C and rec.C.
An example of such macros will be given later.

\lstinputlisting[language=sh,title={LSF example job}]{scripts/lsfjob}
\section{Simulation} \label{Simulation}

% -----------------------------------------------------------------------------
Heavy-ion collisions produce a very large number of particles in the
final state. This is a challenge for the reconstruction and analysis
algorithms. The detector design and the development of these algorithms require
a predictive and precise simulation of the detector response. Model predictions,
discussed in the first volume of the Physics Performance Report, for the
charged multiplicity at the LHC in \mbox{Pb--Pb} collisions vary from 1400
to 8000 particles in the central unit of rapidity. The experiment was
designed when the highest available nucleon--nucleon center-of-mass energy
for heavy-ion interactions was $20 \, {\rm GeV}$ per nucleon--nucleon
pair at the CERN SPS, i.e. a factor of about 300 less than the energy at
the LHC. Recently, the RHIC collider came online. Its top energy of
$200\, {\rm GeV}$ per nucleon--nucleon pair is still 30 times less
than the LHC energy. The RHIC data seem to suggest that the LHC
multiplicity will be on the lower side of the interval. However, the
extrapolation is so large that both the hardware and software of ALICE
have to be designed for the highest multiplicity. Moreover, since the
predictions of different generators of heavy-ion collisions differ
substantially at LHC energies, we have to use several of them and
compare the results.
The simulation of the processes involved in the transport through the
detector of the particles emerging from the interaction is confronted
with several problems:

\begin {itemize}
\item existing event generators give different answers for parameters
  such as expected multiplicities, $p_T$ dependence and rapidity
  dependence at LHC energies;

\item most of the physics signals, like hyperon production, high-$p_T$
  phenomena, open charm and beauty, quarkonia etc., are not exactly
  reproduced by the existing event generators;

\item simulation of small cross-sections would demand prohibitively
  high computing resources to simulate a number of events comparable to
  the expected number of detected events in the experiment;

\item the existing generators do not provide for event topologies like
  momentum correlations, azimuthal flow etc.
\end {itemize}
To allow efficient simulations nevertheless, we have adopted a
framework that provides a number of options:
\item{} the simulation framework provides an interface to external
  generators, like HIJING~\cite{MC:HIJING} and
  DPMJET~\cite{MC:DPMJET};

\item{} a parameterized, signal-free, underlying event where the
  produced multiplicity can be specified as an input parameter is
  provided;

\item{} rare signals can be generated using the interface to external
  generators like PYTHIA or simple parameterizations of transverse
  momentum and rapidity spectra defined in function libraries;

\item{} the framework provides a tool to assemble events from
  different signal generators (event cocktails);

\item{} the framework provides tools to combine underlying events and
  signal events at the primary particle level (cocktail) and at the
  summable digit level (merging);

\item{} ``afterburners'' are used to introduce particle correlations in a
  controlled way. An afterburner is a program which changes the
  momenta of the particles produced by another generator, and thus
  modifies the multi-particle momentum distributions as desired.
The implementation of this strategy is described below. The results of
different \MC generators for heavy-ion collisions are
described in section~\ref{MC:Generators}.
\subsection{Simulation framework}

The simulation framework covers the simulation of primary collisions
and generation of the emerging particles, the transport of particles
through the detector, the simulation of energy depositions (hits) in
the detector components, their response in the form of so-called summable
digits, the generation of digits from summable digits with the
optional merging of underlying events, and the creation of raw data.
The \class{AliSimulation} class provides a simple user interface to
the simulation framework. This section focuses on the simulation
framework from the (detector) software developer's point of view.
  \centering
  \includegraphics[width=10cm]{picts/SimulationFramework}
  \caption{Simulation framework.} \label{MC:Simulation}
\textbf{Generation of Particles}

Different generators can be used to produce particles emerging from
the collision. The class \class{AliGenerator} is the base class
defining the virtual interface to the generator programs. The
generators are described in more detail in the ALICE PPR Volume 1 and
in the next chapter.
\textbf{Virtual Monte Carlo}

The simulation of particles traversing the detector components is
performed by a class derived from \class{TVirtualMC}. The Virtual
Monte Carlo also provides an interface to construct the geometry of
the detectors. The task of the geometry description is done by the
geometrical modeler \class{TGeo}. The concrete implementation of the
virtual Monte Carlo application \class{TVirtualMCApplication} is \class{AliMC}. The
Monte Carlo programs used in ALICE are GEANT~3.21, GEANT~4 and FLUKA. More
information can be found on the VMC Web page:
As explained above, our strategy was to develop a virtual interface to
the detector simulation code. We call the interface to the transport
code the Virtual Monte Carlo. It is implemented via C++ virtual classes
and is schematically shown in Fig.~\ref{MC:vmc}. The codes that
implement the abstract classes are real C++ programs or wrapper
classes that interface to FORTRAN programs.
  \centering
  \includegraphics[width=10cm]{picts/vmc}
  \caption{Virtual \MC} \label{MC:vmc}
Thanks to the Virtual Monte Carlo we have converted all the FORTRAN user
code developed for GEANT~3 into C++, including the geometry definition
and the user scoring routines, \texttt{StepManager}. These have been
integrated in the detector classes of the AliRoot framework. The
output of the simulation is saved directly with ROOT I/O, simplifying
the development of the digitization and reconstruction code in C++.
\textbf{Modules and Detectors}

Each module of the ALICE detector is described by a class derived from
\class{AliModule}. Classes for active modules (= detectors) are not
derived directly from \class{AliModule} but from its subclass
\class{AliDetector}. These base classes define the interface to the
simulation framework via a set of virtual methods.
\textbf{Configuration File (Config.C)}

The configuration file is a C++ macro that is processed before the
simulation starts. It creates and configures the Monte Carlo object,
the generator object, the magnetic field map and the detector modules.
A detailed description is given below.
\textbf{Detector Geometry}

The virtual Monte Carlo application creates and initializes the
geometry of the detector modules by calling the virtual functions
\method{CreateMaterials}, \method{CreateGeometry}, \method{Init} and
\textbf{Vertexes and Particles}

In case the simulated event is intended to be merged with an
underlying event, the primary vertex is taken from the file containing
the underlying event by using the vertex generator
\class{AliVertexGenFile}. Otherwise the primary vertex is generated
according to the generator settings. Then the particles emerging from
the collision are generated and put on the stack (an instance of
\class{AliStack}). The transport of particles through the detector is
performed by the Monte Carlo object. The decay of particles is usually
handled by the external decayer \class{AliDecayerPythia}.
\textbf{Hits and Track References}

The Monte Carlo simulates the transport of a particle step by step.
After each step the virtual method \method{StepManager} of the module
in which the particle is currently located is called. In this step
manager method, the hits in the detector are created by calling
\method{AddHit}. Optionally also track references (location and
momentum of simulated particles at selected places) can be created by
calling \method{AddTrackReference}. \method{AddHit} has to be
implemented by each detector, whereas \method{AddTrackReference} is
already implemented in \class{AliModule}. The container and the branch for the
hits -- and for the (summable) digits -- are managed by the detector
class via a set of so-called loaders. The relevant data members and
methods are fHits, fDigits, \method{ResetHits}, \method{ResetSDigits},
\method{ResetDigits}, \method{MakeBranch} and \method{SetTreeAddress}.
For each detector, methods like \method{PreTrack}, \method{PostTrack},
\method{FinishPrimary}, \method{FinishEvent} and \method{FinishRun}
are called during the simulation when the conditions indicated by the
method names are fulfilled.
\textbf{Summable Digits}

Summable digits are created by calling the virtual method \method{Hits2SDigits}
of a detector. This method loops over all events, creates the summable
digits from the hits and stores them in the sdigits file(s).
\textbf{Digitization and Merging}

Dedicated classes derived from \class{AliDigitizer} are used for the
conversion of summable digits into digits. Since \class{AliDigitizer}
is a \class{TTask}, this conversion is done for
the current event by the \method{Exec} method. Inside this method the summable
digits of all input streams have to be added, combined with noise,
converted to digital values taking into account possible thresholds,
and stored in the digits container.
The input streams (more than one in case of merging) as well as the
output stream are managed by an object of type \class{AliRunDigitizer}. The
methods \method{GetNinputs}, \method{GetInputFolderName} and \method{GetOutputFolderName} return
the relevant information. The run digitizer is accessible inside the
digitizer via the protected data member fManager. If the flag
fRegionOfInterest is set, only the detector parts where summable digits
from the signal event are present should be digitized. When \MC labels
are assigned to digits, the stream-dependent offset given by the
method \method{GetMask} is added to the label of the summable digit.
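The core of such an \method{Exec} method can be sketched in a small
stand-alone program (plain C++, independent of AliRoot; the container
types, the noise model and the mask step are simplified stand-ins for
illustration):

\begin{lstlisting}[language=C++, title={Digitization sketch (simplified stand-in types)}]
#include <cassert>
#include <map>
#include <vector>

struct SDigit { int channel; double signal; int label; };
struct Digit  { int channel; int adc;      int label; };
struct Acc    { double sum; double max; int label; };

// Add the summable digits of all input streams channel by channel,
// include a noise term, apply a threshold and keep, for each channel,
// the MC label of the largest contribution, offset by a
// stream-dependent mask (the role of GetMask() in the real framework).
std::vector<Digit> Digitize(const std::vector<std::vector<SDigit> >& streams,
                            double noise, int threshold, int maskStep)
{
  std::map<int, Acc> acc;
  for (size_t is = 0; is < streams.size(); ++is) {
    int mask = static_cast<int>(is) * maskStep;  // offset for this stream
    for (size_t id = 0; id < streams[is].size(); ++id) {
      const SDigit& sd = streams[is][id];
      Acc& a = acc[sd.channel];                  // zero-initialized on insert
      a.sum += sd.signal;
      if (sd.signal > a.max) { a.max = sd.signal; a.label = sd.label + mask; }
    }
  }
  std::vector<Digit> digits;
  for (std::map<int, Acc>::const_iterator it = acc.begin();
       it != acc.end(); ++it) {
    int adc = static_cast<int>(it->second.sum + noise);  // ADC conversion
    if (adc >= threshold) {                              // zero suppression
      Digit d = { it->first, adc, it->second.label };
      digits.push_back(d);
    }
  }
  return digits;
}
\end{lstlisting}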
The detector-specific digitizer object is created in the virtual
method \method{CreateDigitizer} of the concrete detector class. The run
digitizer object is used to construct the detector
digitizer. The \method{Init} method of each digitizer is called before the loop
over the events starts.
A direct conversion from hits to digits can be implemented in
the method \method{Hits2Digits} of a detector. The loop over the events is
inside this method. Of course merging is not supported in this case.
An example of a simulation script that can be used for the simulation of
proton--proton collisions is provided below:
\begin{lstlisting}[language=C++, title={Simulation run}]
 void sim(Int_t nev=100) {
   AliSimulation simulator;
   // Measure the total time spent in the simulation
   TStopwatch timer;
   timer.Start();
   // List of detectors, where both summable digits and digits are provided
   // Direct conversion of hits to digits for faster processing (ITS TPC)
   simulator.SetMakeDigitsFromHits("ITS TPC");
   simulator.Run(nev);
   timer.Stop();
   timer.Print();
 }
The following example shows how one can do event merging:

\begin{lstlisting}[language=C++, title={Event merging}]
 void sim(Int_t nev=6) {
   AliSimulation simulator;
   // The underlying events are stored in a separate directory.
   // Three signal events will be merged in turn with each
   // underlying event
   simulator.MergeWith("../backgr/galice.root",3);
   simulator.Run(nev);
 }
\textbf{Raw Data}

The digits stored in ROOT containers can be converted into the DATE~\cite{DATE}
format that will be the `payload' of the ROOT classes containing the
raw data. This is done for the current event in the method
\method{Digits2Raw} of the detector.
The simulation of raw data is managed by the class \class{AliSimulation}. To
create raw data DDL files it loops over all events. For each event it
creates a directory, changes to this directory and calls the method
\method{Digits2Raw} of each selected detector. In the \method{Digits2Raw} method the DDL
files of a detector are created from the digits for the current
For the conversion of the DDL files to a DATE file the
\class{AliSimulation} class uses the tool dateStream. To create a raw
data file in ROOT format with the DATE output as payload the program alimdc is
The only part that has to be implemented in each detector is
the \method{Digits2Raw} method. In this method one file per
DDL has to be created, obeying the conventions for file names and DDL
IDs. Each file is a binary file with a DDL data header at the
beginning. The DDL data header is implemented in the structure
\class{AliRawDataHeader}. The data member fSize should be set to the total
size of the DDL raw data including the size of the header. The
attribute bit 0 should be set by calling the method \method{SetAttribute(0)} to
indicate that the data in this file are valid. The attribute bit 1 can
be set to indicate compressed raw data.
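The structure of such a DDL file can be illustrated with a stand-alone
sketch (plain C++; \texttt{DdlHeader} is a simplified stand-in for
\class{AliRawDataHeader}, which contains additional status and trigger
words):

\begin{lstlisting}[language=C++, title={DDL file sketch (DdlHeader is a simplified stand-in)}]
#include <cassert>
#include <cstdio>
#include <stdint.h>
#include <vector>

// Simplified stand-in for the DDL data header; the real structure is
// AliRawDataHeader and has more members.
struct DdlHeader {
  uint32_t fSize;        // total size of the DDL data, header included
  uint32_t fAttributes;  // bit 0 = data valid, bit 1 = compressed
  void SetAttribute(int bit) { fAttributes |= 1u << bit; }
};

// Write one binary DDL file: the header first, then the payload.
bool WriteDdlFile(const char* name, const std::vector<uint16_t>& payload)
{
  std::FILE* f = std::fopen(name, "wb");
  if (!f) return false;
  DdlHeader header = { 0, 0 };
  header.fSize = sizeof(DdlHeader)
               + payload.size() * sizeof(uint16_t);  // header + payload
  header.SetAttribute(0);                            // mark data as valid
  std::fwrite(&header, sizeof(header), 1, f);
  if (!payload.empty())
    std::fwrite(&payload[0], sizeof(uint16_t), payload.size(), f);
  std::fclose(f);
  return true;
}
\end{lstlisting}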
The detector-specific raw data are stored in the DDL files after the
DDL data header. The format of these raw data should be as close as
possible to the one that will be delivered by the detector. This
includes the order in which the channels will be read out.
Below we show an example of raw data creation for all the detectors

 void sim(Int_t nev=1) {
   AliSimulation simulator;
   // Create raw data for ALL detectors, rootify it and store in the
   // file raw.root. Do not delete the intermediate files
   simulator.SetWriteRawData("ALL","raw.root",kFALSE);
   simulator.Run(nev);
 }
\subsection{Configuration: example of Config.C}

The example below contains as comments the most important information:

\lstinputlisting[language=C++] {scripts/Config.C}
% -----------------------------------------------------------------------------

\subsection{Event generation}
  \centering
  \includegraphics[width=10cm]{picts/aligen}
  \caption{\texttt{AliGenerator} is the base class, which has the
    responsibility to generate the primary particles of an event. Some
    realizations of this class do not generate the particles themselves
    but delegate the task to an external generator like PYTHIA through the
    \texttt{TGenerator} interface.}
  \label{MC:aligen}
\subsubsection{Parameterized generation}

The event generation based on parameterization can be used to produce
signal-free final states. It avoids dependence on a
specific model, and is efficient and flexible. It can be used to
study the track reconstruction efficiency
as a function of the initial multiplicity and occupancy.

\class{AliGenHIJINGparam}~\cite{MC:HIJINGparam} is an example of an internal
AliRoot generator based on parameterized
pseudorapidity density and transverse momentum distributions of
charged and neutral pions and kaons. The pseudorapidity
distribution was obtained from a HIJING simulation of central
Pb--Pb collisions and scaled to a charged-particle multiplicity of
8000 in the pseudorapidity interval $|\eta | < 0.5$. Note that
this is about 10\% higher than the corresponding value for a
rapidity density with an average ${\rm d}N/{\rm d}y$ of 8000 in
the interval $|y | < 0.5$.
The transverse-momentum distribution is parameterized from the
measured CDF pion $p_T$ distribution at $\sqrt{s} = 1.8 \, {\rm TeV}$.
The corresponding kaon $p_T$ distribution was obtained from the
pion distribution by $m_T$ scaling. See Ref.~\cite{MC:HIJINGparam}
for the details of these parameterizations.
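The $m_T$-scaling step can be sketched as follows (plain C++; the pion
spectrum shape and its parameters are invented for illustration and
are not the actual CDF fit):

\begin{lstlisting}[language=C++, title={$m_T$-scaling sketch (illustrative pion spectrum)}]
#include <cassert>
#include <cmath>

const double kMassPion = 0.1396;  // GeV/c^2
const double kMassKaon = 0.4937;  // GeV/c^2

// Illustrative pion spectrum dN/dpT ~ pT / (1 + (pT/p0)^2)^n;
// the shape and parameters stand in for the fitted CDF parameterization.
double PtPion(double pt)
{
  const double p0 = 1.3, n = 4.0;
  return pt / std::pow(1.0 + (pt / p0) * (pt / p0), n);
}

// mT scaling: evaluate the pion spectrum at the transverse momentum
// that gives the same transverse mass, pT' = sqrt(pT^2 + mK^2 - mpi^2).
double PtKaon(double pt)
{
  double ptEquiv = std::sqrt(pt * pt
                             + kMassKaon * kMassKaon
                             - kMassPion * kMassPion);
  return PtPion(ptEquiv);
}
\end{lstlisting}

At low \pT\ the scaled kaon spectrum is suppressed with respect to the
pion one, while at high \pT\ the two converge, as expected from $m_T$
scaling.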
In many cases, the expected transverse momentum and rapidity
distributions of particles are known. In other cases the effect of
variations in these distributions must be investigated. In both
situations it is appropriate to use generators that produce
primary particles and their decays by sampling from parameterized
spectra. To meet the different physics requirements in a modular
way, the parameterizations are stored in independent function
libraries wrapped into classes that can be plugged into the
generator. This is schematically illustrated in
Fig.~\ref{MC:evglib}, where four different generator libraries can
be loaded via the abstract generator interface.
It is customary in heavy-ion event generation to superimpose
different signals on an event to tune the reconstruction
algorithms. This is possible in AliRoot via the so-called cocktail
generator (Fig.~\ref{MC:cocktail}). This creates events from
user-defined particle cocktails by choosing as ingredients a list
of particle generators.
  \centering
  \includegraphics[width=10cm]{picts/evglib}
  \caption{\texttt{AliGenParam} is a realization of \texttt{AliGenerator}
    that generates particles using parameterized \pT\ and
    pseudorapidity distributions. Instead of coding a fixed number of
    parameterizations directly into the class implementations, user-defined
    parameterization libraries (AliGenLib) can be connected at
    run time, allowing for maximum flexibility.} \label{MC:evglib}
An example of \class{AliGenParam} usage is presented below:

 // Example for Upsilon production from parameterization
 // using the default library (AliGenMUONlib)
 AliGenParam *gener = new AliGenParam(ntracks, AliGenMUONlib::kUpsilon);
 gener->SetMomentumRange(0,999);  // Wide cut on the Upsilon momentum
 gener->SetPtRange(0,999);        // Wide cut on Pt
 gener->SetPhiRange(0. , 360.);   // Full azimuthal range
 gener->SetYRange(2.5,4);         // In the acceptance of the MUON arm
 gener->SetCutOnChild(1);         // Enable cuts on Upsilon decay products
 gener->SetChildThetaRange(2,9);  // Theta range for the decay products
 gener->SetOrigin(0,0,0);         // Vertex position
 gener->SetSigma(0,0,5.3);        // Sigma in (X,Y,Z) (cm) on IP position
 gener->SetForceDecay(kDiMuon);   // Upsilon->mu+ mu- decay
 gener->SetTrackingFlag(0);       // No particle transport
 gener->Init();
To facilitate the usage of different generators we have developed
an abstract generator interface called \texttt{AliGenerator}, see
Fig.~\ref{MC:aligen}. The objective is to provide the user with
an easy and coherent way to study a variety of physics signals as
well as a full set of tools for testing and background studies. This
interface allows the study of full events, signal processes, and
a mixture of both, i.e. cocktail events (see an example later).

Several event generators are available via the abstract ROOT class
that implements the generic generator interface, \texttt{TGenerator}.
Through implementations of this abstract base class we wrap
FORTRAN \MC codes like PYTHIA, HERWIG, and HIJING, which are
thus accessible from the AliRoot classes. In particular the
interface to PYTHIA includes the use of nuclear structure
functions from LHAPDF.
PYTHIA is used for the simulation of proton-proton interactions and for the
generation of jets in case of event merging. An example of minimum
bias PYTHIA events is presented below:

 AliGenPythia *gener = new AliGenPythia(-1);
 gener->SetMomentumRange(0,999999);
 gener->SetThetaRange(0., 180.);
 gener->SetYRange(-12,12);
 gener->SetPtRange(0,1000);
 gener->SetProcess(kPyMb);        // Min. bias events
 gener->SetEnergyCMS(14000.);     // LHC energy
 gener->SetOrigin(0, 0, 0);       // Vertex position
 gener->SetSigma(0, 0, 5.3);      // Sigma in (X,Y,Z) (cm) on IP position
 gener->SetCutVertexZ(1.);        // Truncate at 1 sigma
 gener->SetVertexSmear(kPerEvent);// Smear per event
 gener->SetTrackingFlag(1);       // Particle transport
 gener->Init();
HIJING (Heavy-Ion Jet Interaction Generator) combines a
QCD-inspired model of jet production~\cite{MC:HIJING} with the
Lund model~\cite{MC:LUND} for jet fragmentation. Hard or
semi-hard parton scatterings with transverse momenta of a few GeV
are expected to dominate high-energy heavy-ion collisions. The
HIJING model has been developed with special emphasis on the role
of minijets in pp, pA and A--A reactions at collider energies.

Detailed systematic comparisons of HIJING results with a wide
range of data demonstrate a qualitative understanding of the
interplay between soft string dynamics and hard QCD interactions.
In particular, HIJING reproduces many inclusive spectra,
two-particle correlations, and the observed flavor and
multiplicity dependence of the average transverse momentum.
The Lund FRITIOF~\cite{MC:FRITIOF} model and the Dual Parton
Model~\cite{MC:DPM} (DPM) have guided the formulation of HIJING
for soft nucleus--nucleus reactions at intermediate energies,
$\sqrt{s_{\rm NN}}\approx 20\, {\rm GeV}$. The hadronic-collision
model has been inspired by the successful implementation of
perturbative QCD processes in PYTHIA~\cite{MC:PYTH}. Binary
scatterings with Glauber geometry for multiple interactions are
used to extrapolate to pA and A--A collisions.
Two important features of HIJING are jet quenching and nuclear
shadowing. Jet quenching is the energy loss by partons in nuclear
matter. It is responsible for an increase of the particle
multiplicity at central rapidities. Jet quenching is modeled by an
assumed energy loss by partons traversing dense matter. A simple
color configuration is assumed for the multi-jet system and the Lund
fragmentation model is used for the hadronisation. HIJING does not
simulate secondary interactions.
Shadowing describes the modification of the free-nucleon parton
density in the nucleus. At the low momentum fractions, $x$,
probed in collisions at the LHC, shadowing results in a decrease
of the multiplicity. Parton shadowing is taken into account using
a parameterization of the modification.
Here is an example of event generation with HIJING:

 AliGenHijing *gener = new AliGenHijing(-1);
 gener->SetEnergyCMS(5500.);             // center of mass energy
 gener->SetReferenceFrame("CMS");        // reference frame
 gener->SetProjectile("A", 208, 82);     // projectile
 gener->SetTarget    ("A", 208, 82);     // target
 gener->KeepFullEvent();                 // HIJING will keep the full parent child chain
 gener->SetJetQuenching(1);              // enable jet quenching
 gener->SetShadowing(1);                 // enable shadowing
 gener->SetDecaysOff(1);                 // neutral pion and heavy particle decays switched off
 gener->SetSpectators(0);                // Don't track spectators
 gener->SetSelectAll(0);                 // kinematic selection
 gener->SetImpactParameterRange(0., 5.); // Impact parameter range (fm)
 gener->Init();
\subsubsection{Additional universal generators}

The following universal generators are available in AliRoot:
\item DPMJET: an implementation of the dual parton
  model~\cite{MC:DPMJET};
\item ISAJET: a \MC event generator for pp, $\bar pp$, and $e^+e^-$
  reactions~\cite{MC:ISAJET};
\item HERWIG: a \MC package for simulating Hadron Emission
  Reactions With Interfering Gluons~\cite{MC:HERWIG}.
An example of HERWIG configuration in the Config.C is shown below:

AliGenHerwig *gener = new AliGenHerwig(-1);
// final state kinematic cuts
gener->SetPhiRange(0. ,360.);
gener->SetThetaRange(0., 180.);
// vertex position and smearing
gener->SetOrigin(0,0,0);   // vertex position
gener->SetSigma(0,0,5.6);  // Sigma in (X,Y,Z) (cm) on IP position
// Beam momenta
// Beams
// Structure function
// Hard scattering
// Min bias
\subsubsection{Generators for specific studies}

MEVSIM~\cite{MC:MEVSIM} was developed for the STAR experiment to
quickly produce a large number of A--A collisions for some
specific needs -- initially for HBT studies and for testing
reconstruction and analysis software. However, since the user is
able to generate specific signals, it was extended to flow and
event-by-event fluctuation analysis. A detailed description of
MEVSIM can be found in Ref.~\cite{MC:MEVSIM}.
MEVSIM generates particle spectra according to a momentum model
chosen by the user. The main input parameters are: types and
numbers of generated particles, momentum-distribution model,
reaction-plane and azimuthal-anisotropy coefficients, multiplicity
fluctuations, number of generated events, etc. The momentum models
include factorized $p_T$ and rapidity distributions, non-expanding
and expanding thermal sources, arbitrary distributions in $y$ and
$p_T$, and others. The reaction plane and azimuthal anisotropy are
defined by Fourier coefficients (up to six), including
directed and elliptic flow. Resonance production can also be
included.

MEVSIM was originally written in FORTRAN and was later integrated into
AliRoot. A complete description of the AliRoot implementation of MEVSIM can
be found on the web page (\url{}).
GeVSim~\cite{MC:GEVSIM} is a fast and easy-to-use \MC
event generator implemented in AliRoot. It provides events that the
user can configure according to the specific needs of a simulation
project, in particular those of flow and event-by-event fluctuation
studies. It was developed to facilitate detector-performance studies
and the testing of algorithms. GeVSim can also be used to generate
signal-free events to be processed by afterburners, for example the
HBT processor.

GeVSim is based on the MevSim~\cite{MC:MEVSIM} event generator
developed for the STAR experiment.

GeVSim generates a list of particles by randomly sampling a
distribution function. The parameters of the single-particle spectra
and their event-by-event fluctuations are explicitly defined by
the user. Single-particle transverse-momentum and rapidity spectra
can either be selected from a menu of four predefined
distributions, the same as in MevSim, or provided by the user.
Flow can easily be introduced into the simulated events. The flow
parameters are defined separately for each particle type and can
either be set to a constant value or parameterized as a function of
transverse momentum and rapidity. Two parameterizations of elliptic
flow based on results obtained by the RHIC experiments are provided.

GeVSim also has extended possibilities for simulating
event-by-event fluctuations. The model allows fluctuations
following an arbitrary, analytically defined distribution in
addition to the Gaussian distribution provided by MevSim. It is
also possible to systematically alter a given parameter to scan
the parameter space in one run. This feature is useful when
analyzing performance with respect to, for example, multiplicity
or event-plane angle.

The current status and further development of the GeVSim code and
documentation can be found in Ref.~\cite{MC:Radomski}.
\textbf{HBT processor}

Correlation functions constructed with the data produced by MEVSIM
or any other event generator are normally flat in the region of
small relative momenta. The HBT-processor afterburner introduces
two-particle correlations into the set of generated particles. It
shifts the momentum of each particle so that the correlation
function of a selected model is reproduced. The imposed
correlation effects due to Quantum Statistics (QS) and Coulomb
Final-State Interactions (FSI) do not affect the single-particle
distributions and multiplicities. The event structures before and
after passing through the HBT processor are identical. Thus, the
event reconstruction procedure with and without correlations is
also identical. However, the track-reconstruction efficiency, momentum
resolution and particle identification need not be, since
correlated particles have a special topology at small relative
velocities. We can thus verify the influence of various
experimental factors on the correlation functions.
The method, proposed by L.~Ray and G.W.~Hoffmann~\cite{MC:HBTproc},
is based on random shifts of the particle three-momentum within a
confined range. After each shift, a comparison is made with the
correlation functions resulting from the assumed model of the
space--time distribution and with the single-particle spectra,
which should remain unchanged. The shift is kept if the
$\chi^2$-test shows better agreement. The process is iterated
until satisfactory agreement is achieved. In order to construct
the correlation function, a reference sample is made by mixing
particles from several consecutive events. This method has an
important practical consequence for the simulation: at least two
events must be processed simultaneously.
Some specific features of this approach are important for practical
applications:
\item{} the HBT processor can simultaneously generate correlations of up
  to two particle types (e.g. positive and negative pions).
  Correlations of other particles can be added subsequently.
\item{} the form of the correlation function has to be parameterized
  analytically. One- and three-dimensional parameterizations are
  possible.
\item{} a static source is usually assumed. Dynamical effects,
  related to expansion or flow, can be simulated in a stepwise form by
  repeating simulations for different values of the space--time
  parameters associated with different kinematic intervals.
\item{} Coulomb effects may be introduced by one of three
  approaches: Gamow factor, experimentally modified Gamow correction,
  and integrated Coulomb wave functions for discrete values of the
  source radii.
\item{} strong interactions are not implemented.

The detailed description of the HBT processor can be found
\textbf{Flow afterburner}

Azimuthal anisotropies, especially elliptic flow, carry unique
information about collective phenomena and consequently are
important for the study of heavy-ion collisions. Additional
information can be obtained by studying different heavy-ion
observables, especially jets, relative to the event plane.
Therefore it is necessary to evaluate the capability of ALICE to
reconstruct the event plane and to study elliptic flow.

Since there is no well-understood microscopic description of
the flow effect, it cannot be correctly simulated by microscopic
event generators. Therefore, to generate events with flow, the user has
to use event generators based on macroscopic models, like GeVSim
\cite{MC:GEVSIM}, or an afterburner which can generate flow on top
of events produced by event generators based on a microscopic
description of the interaction. In the AliRoot framework such a
flow afterburner is implemented.
The algorithm to apply the azimuthal correlation consists in shifting the
azimuthal coordinates of the particles. The transformation is given
by \cite{MC:POSCANCER}:
\varphi \rightarrow \varphi' = \varphi + \Delta\varphi, \]
\Delta\varphi = \sum_{n}\frac{-2}{n} v_{n}\left( p_{t},y\right)
\sin\left[ n\left( \varphi -\psi \right)\right], \] where \(
v_{n}(p_{t},y) \) is the flow coefficient to be obtained, \( n \)
is the harmonic number and \( \psi \) is the event-plane angle.
Note that the algorithm is deterministic and does not involve any
random-number generation.
The value of the flow coefficient can either be constant or parameterized as a
function of transverse momentum and rapidity. Two parameterizations
of elliptic flow are provided, as in GeVSim.
 AliGenGeVSim* gener = new AliGenGeVSim(0);

 Float_t mult = 2000; // number of charged particles in |eta| < 0.5
 Float_t vn = 0.01; // flow coefficient Vn

 Float_t sigma_eta = 2.75; // Sigma of the Gaussian dN/dEta
 Float_t etamax = 7.00; // Maximum eta

 // Scale from multiplicity in |eta| < 0.5 to |eta| < |etamax|
 Float_t mm = mult * (TMath::Erf(etamax/sigma_eta/sqrt(2.)) /
 TMath::Erf(0.5/sigma_eta/sqrt(2.)));

 // Scale from charged to total multiplicity
 mm *= 1.587;

 // Define particles

 // 78% Pions (26% pi+, 26% pi-, 26% pi0) T = 250 MeV
 AliGeVSimParticle *pp =
 new AliGeVSimParticle(kPiPlus, 1, 0.26 * mm, 0.25, sigma_eta) ;
 AliGeVSimParticle *pm =
 new AliGeVSimParticle(kPiMinus, 1, 0.26 * mm, 0.25, sigma_eta) ;
 AliGeVSimParticle *p0 =
 new AliGeVSimParticle(kPi0, 1, 0.26 * mm, 0.25, sigma_eta) ;

 // 12% Kaons (3% K0short, 3% K0long, 3% K+, 3% K-) T = 300 MeV
 AliGeVSimParticle *ks =
 new AliGeVSimParticle(kK0Short, 1, 0.03 * mm, 0.30, sigma_eta) ;
 AliGeVSimParticle *kl =
 new AliGeVSimParticle(kK0Long, 1, 0.03 * mm, 0.30, sigma_eta) ;
 AliGeVSimParticle *kp =
 new AliGeVSimParticle(kKPlus, 1, 0.03 * mm, 0.30, sigma_eta) ;
 AliGeVSimParticle *km =
 new AliGeVSimParticle(kKMinus, 1, 0.03 * mm, 0.30, sigma_eta) ;

 // 10% Protons / Neutrons (5% Protons, 5% Neutrons) T = 250 MeV
 AliGeVSimParticle *pr =
 new AliGeVSimParticle(kProton, 1, 0.05 * mm, 0.25, sigma_eta) ;
 AliGeVSimParticle *ne =
 new AliGeVSimParticle(kNeutron, 1, 0.05 * mm, 0.25, sigma_eta) ;

 // Set elliptic flow properties

 Float_t pTsaturation = 2. ;

 pp->SetEllipticParam(vn,pTsaturation,0.) ;
 pm->SetEllipticParam(vn,pTsaturation,0.) ;
 p0->SetEllipticParam(vn,pTsaturation,0.) ;
 pr->SetEllipticParam(vn,pTsaturation,0.) ;
 ne->SetEllipticParam(vn,pTsaturation,0.) ;
 ks->SetEllipticParam(vn,pTsaturation,0.) ;
 kl->SetEllipticParam(vn,pTsaturation,0.) ;
 kp->SetEllipticParam(vn,pTsaturation,0.) ;
 km->SetEllipticParam(vn,pTsaturation,0.) ;

 // Set directed flow properties

 pp->SetDirectedParam(vn,1.0,0.) ;
 pm->SetDirectedParam(vn,1.0,0.) ;
 p0->SetDirectedParam(vn,1.0,0.) ;
 pr->SetDirectedParam(vn,1.0,0.) ;
 ne->SetDirectedParam(vn,1.0,0.) ;
 ks->SetDirectedParam(vn,1.0,0.) ;
 kl->SetDirectedParam(vn,1.0,0.) ;
 kp->SetDirectedParam(vn,1.0,0.) ;
 km->SetDirectedParam(vn,1.0,0.) ;

 // Add particles to the list

 gener->AddParticleType(pp) ;
 gener->AddParticleType(pm) ;
 gener->AddParticleType(p0) ;
 gener->AddParticleType(pr) ;
 gener->AddParticleType(ne) ;
 gener->AddParticleType(ks) ;
 gener->AddParticleType(kl) ;
 gener->AddParticleType(kp) ;
 gener->AddParticleType(km) ;

 // Random event plane
 TF1 *rpa = new TF1("gevsimPsiRndm","1", 0, 360);

 gener->SetPtRange(0., 9.) ; // used for bin size in numerical integration
 gener->SetPhiRange(0, 360);

 gener->SetOrigin(0, 0, 0); // vertex position
 gener->SetSigma(0, 0, 5.3); // Sigma in (X,Y,Z) (cm) on IP position
 gener->SetCutVertexZ(1.); // truncate at 1 sigma
 gener->SetVertexSmear(kPerEvent);
 gener->SetTrackingFlag(1);
 gener->Init();
\textbf{Generator for e$^+$e$^-$ pairs in Pb--Pb collisions}

In addition to strong interactions of heavy ions in central and
peripheral collisions, ultra-peripheral collisions of ions give
rise to coherent, mainly electromagnetic, interactions, among which
the dominant process is the (multiple) e$^+$e$^-$-pair
production \cite{MC:AlscherHT97}
 AA \to AA + n({\rm e}^+{\rm e}^-), \label{nee}
where $n$ is the pair multiplicity. Most electron--positron pairs
are produced in the very forward direction and escape the
experiment. However, for Pb--Pb collisions at the LHC the
cross-section of this process, about $230~{\rm kb}$, is
enormous. A sizable fraction of the pairs produced with large momentum
transfer can contribute to the hit rate in the forward detectors,
increasing the occupancy or trigger rate. In order to study this
effect an event generator for e$^+$e$^-$-pair production has
been implemented in the AliRoot framework \cite{MC:Sadovsky}. The
class \texttt{TEpEmGen} is a realisation of the \texttt{TGenerator}
interface for external generators and wraps the FORTRAN code used
to calculate the differential cross-section. \texttt{AliGenEpEmv1}
derives from \texttt{AliGenerator} and uses the external generator to
put the pairs on the AliRoot particle stack.
\subsubsection{Combination of generators: AliGenCocktail}

  \centering
  \includegraphics[width=10cm]{picts/cocktail}
  \caption{The \texttt{AliGenCocktail} generator is a realization of {\tt
  AliGenerator} which does not generate particles itself but
  delegates this task to a list of objects of type {\tt
  AliGenerator} that can be connected as entries ({\tt
  AliGenCocktailEntry}) at run time. In this way different physics
  channels can be combined in one event.} \label{MC:cocktail}

Here is an example of a cocktail used for studies in the TRD detector:
 // The cocktail generator
 AliGenCocktail *gener = new AliGenCocktail();

 // Phi meson (10 particles)
 AliGenParam *phi =
 new AliGenParam(10,new AliGenMUONlib(),AliGenMUONlib::kPhi,"Vogt PbPb");
 phi->SetPtRange(0, 100);
 phi->SetYRange(-1., +1.);
 phi->SetForceDecay(kDiElectron);

 // Omega meson (10 particles)
 AliGenParam *omega =
 new AliGenParam(10,new AliGenMUONlib(),AliGenMUONlib::kOmega,"Vogt PbPb");
 omega->SetPtRange(0, 100);
 omega->SetYRange(-1., +1.);
 omega->SetForceDecay(kDiElectron);

 // J/psi
 AliGenParam *jpsi = new AliGenParam(10,new AliGenMUONlib(),
 AliGenMUONlib::kJpsiFamily,"Vogt PbPb");
 jpsi->SetPtRange(0, 100);
 jpsi->SetYRange(-1., +1.);
 jpsi->SetForceDecay(kDiElectron);

 // Upsilon family
 AliGenParam *ups = new AliGenParam(10,new AliGenMUONlib(),
 AliGenMUONlib::kUpsilonFamily,"Vogt PbPb");
 ups->SetPtRange(0, 100);
 ups->SetYRange(-1., +1.);
 ups->SetForceDecay(kDiElectron);

 // Open charm particles
 AliGenParam *charm = new AliGenParam(10,new AliGenMUONlib(),
 AliGenMUONlib::kCharm,"central");
 charm->SetPtRange(0, 100);
 charm->SetYRange(-1.5, +1.5);
 charm->SetForceDecay(kSemiElectronic);

 // Beauty particles: semi-electronic decays
 AliGenParam *beauty = new AliGenParam(10,new AliGenMUONlib(),
 AliGenMUONlib::kBeauty,"central");
 beauty->SetPtRange(0, 100);
 beauty->SetYRange(-1.5, +1.5);
 beauty->SetForceDecay(kSemiElectronic);

 // Beauty particles to J/psi ee
 AliGenParam *beautyJ = new AliGenParam(10, new AliGenMUONlib(),
 AliGenMUONlib::kBeauty,"central");
 beautyJ->SetPtRange(0, 100);
 beautyJ->SetYRange(-1.5, +1.5);
 beautyJ->SetForceDecay(kBJpsiDiElectron);

 // Adding all the components of the cocktail
 gener->AddGenerator(phi,"Phi",1);
 gener->AddGenerator(omega,"Omega",1);
 gener->AddGenerator(jpsi,"J/psi",1);
 gener->AddGenerator(ups,"Upsilon",1);
 gener->AddGenerator(charm,"Charm",1);
 gener->AddGenerator(beauty,"Beauty",1);
 gener->AddGenerator(beautyJ,"J/Psi from Beauty",1);

 // Settings common to all components
 gener->SetOrigin(0, 0, 0); // vertex position
 gener->SetSigma(0, 0, 5.3); // Sigma in (X,Y,Z) (cm) on IP position
 gener->SetCutVertexZ(1.); // truncate at 1 sigma
 gener->SetVertexSmear(kPerEvent);
 gener->SetTrackingFlag(1);
 gener->Init();
\subsection{Particle transport}

\subsubsection{TGeo essential information}

A detailed description of the ROOT geometry package is available in
the ROOT User's Guide\cite{RootUsersGuide}. Several examples can be
found in \$ROOTSYS/tutorials, among them assembly.C, csgdemo.C,
geodemo.C, nucleus.C, rootgeom.C, etc. Here we show a simple usage for
export/import of the ALICE geometry and for checking for overlaps and
extrusions:
 aliroot
 root [0] gAlice->Init()
 root [1] gGeoManager->Export("geometry.root")
 root [2] .q
 aliroot
 root [0] TGeoManager::Import("geometry.root")
 root [1] gGeoManager->CheckOverlaps()
 root [2] gGeoManager->PrintOverlaps()
 root [3] new TBrowser
 # Now you can navigate in Geometry->Illegal overlaps
 # and draw each overlap (double click on it)
Below we show an example of VZERO visualization using the ROOT
geometry package:

 aliroot
 root [0] gAlice->Init()
 root [1] TGeoVolume *top = gGeoManager->GetMasterVolume()
 root [2] Int_t nd = top->GetNdaughters()
 root [3] for (Int_t i=0; i<nd; i++) \
 top->GetNode(i)->GetVolume()->InvisibleAll()
 root [4] TGeoVolume *v0ri = gGeoManager->GetVolume("V0RI")
 root [5] TGeoVolume *v0le = gGeoManager->GetVolume("V0LE")
 root [6] v0ri->SetVisibility(kTRUE);
 root [7] v0ri->VisibleDaughters(kTRUE);
 root [8] v0le->SetVisibility(kTRUE);
 root [9] v0le->VisibleDaughters(kTRUE);
 root [10] top->Draw();
\subsubsection{Particle decays}

We use Pythia to carry out particle decays during the transport. The
default decay channels can be seen in the following way:

 aliroot
 root [0] AliPythia * py = AliPythia::Instance()
 root [1] py->Pylist(12); >> decay.list

The file decay.list will contain the list of particle decays
available in Pythia. Now if we want to force the decay $\Lambda^0 \to
p \pi^-$, the following lines should be included in the Config.C
before we register the decayer:
 AliPythia * py = AliPythia::Instance();
 py->SetMDME(1059,1,0);
 py->SetMDME(1060,1,0);
 py->SetMDME(1061,1,0);

where 1059, 1060 and 1061 are the indices of the decay channels (from
decay.list above) we want to switch off.
\textbf{Fast simulation}

This example is taken from the macro
\$ALICE\_ROOT/FASTSIM/fastGen.C. It shows how one can create a
Kinematics tree which later can be used as input for the particle
transport. A simple selection of events with high multiplicity is
applied.

\lstinputlisting[language=C++] {scripts/fastGen.C}
\textbf{Reading of kinematics tree as input for the particle transport}

We suppose that the macro fastGen.C above has been used to generate
the corresponding set of files, galice.root and Kinematics.root, and
that they are stored in a separate subdirectory, for example kine. Then
the following code in the Config.C will read the set of files and put them
on the stack for transport:
 AliGenExtFile *gener = new AliGenExtFile(-1);

 gener->SetMomentumRange(0,14000);
 gener->SetPhiRange(0.,360.);
 gener->SetThetaRange(45,135);
 gener->SetYRange(-10,10);
 gener->SetOrigin(0, 0, 0); // vertex position
 gener->SetSigma(0, 0, 5.3); // Sigma in (X,Y,Z) (cm) on IP position

 AliGenReaderTreeK * reader = new AliGenReaderTreeK();
 reader->SetFileName("../galice.root");

 gener->SetReader(reader);
 gener->SetTrackingFlag(1);

 gener->Init();
\textbf{Usage of different generators}

Many examples are available in
\$ALICE\_ROOT/macros/Config\_gener.C. The corresponding part can be
extracted and placed in the relevant Config.C file.

% -----------------------------------------------------------------------------
\subsection{Reconstruction Framework}

This chapter
focuses on the reconstruction framework from the (detector) software
developer's point of view.

Wherever it is not explicitly specified otherwise, we refer
to the `global ALICE coordinate system'\cite{CoordinateSystem}. It is a right-handed coordinate
system with
the $z$ axis coinciding with the beam-pipe axis and pointing in the direction
opposite to the muon arm, the $y$ axis pointing up, and the origin of
coordinates defined by the intersection point of the $z$ axis
and the central-membrane plane of the TPC.
Here is a reminder of the terms that are used in the
description of the reconstruction framework (see also section~\ref{AliRootFramework}):

\item {\it Digit}: a digitized signal (ADC count) obtained by
  a sensitive pad of a detector at a certain time.
\item {\it Cluster}: a set of adjacent (in space and/or in time)
  digits that were presumably generated by the same particle crossing the
  sensitive element of a detector.
\item Reconstructed {\it space point}: the estimate of the
  position where a particle crossed the sensitive element of a detector
  (often obtained by calculating the center of gravity of the
  `cluster').
\item Reconstructed {\it track}: a set of five parameters (such as the
  curvature and the angles with respect to the coordinate axes) of the particle's
  trajectory, together with the corresponding covariance matrix, estimated at a given
  point in space.
The input to the reconstruction framework are digits, either in ROOT-tree
format or in raw-data format. First a local reconstruction of clusters is
performed in each detector. Then vertexes and tracks are reconstructed
and particle identification is carried out. The output of the reconstruction
is the Event Summary Data (ESD). The \class{AliReconstruction} class provides
a simple user interface to the reconstruction framework, which is
explained in the source code.

  \centering
  \includegraphics[width=10cm]{picts/ReconstructionFramework}
  \caption{Reconstruction framework.} \label{MC:Reconstruction}
\textbf{Requirements and Guidelines}

The development of the reconstruction framework has been carried out
according to the following requirements and guidelines:

\item the prime goal of the reconstruction is to provide the data that
  are needed for a physics analysis;
\item the reconstruction should aim at high efficiency, purity and
  resolution;
\item the user should have an easy-to-use interface to extract the
  required information from the ESD;
\item the reconstruction code should be efficient but also maintainable;
\item the reconstruction should be as flexible as possible.
  It should be possible to do the reconstruction in one detector even in
  the case that other detectors are not operational.
  To achieve such flexibility each detector module should be able to
  \begin{itemize}
  \item find tracks starting from seeds provided by another detector
    (external seeding),
  \item find tracks without using information from other detectors
    (internal seeding),
  \item find tracks from external seeds and add tracks from internal seeds,
  \item and propagate tracks through the detector, using the already
    assigned clusters, in the inward and outward directions;
  \end{itemize}
\item where it is appropriate, common (base) classes should be used in
  the different reconstruction modules;
\item the interdependencies between the reconstruction modules should
  be minimized.
  If possible, the exchange of information between detectors should be
  done via a common track class;
\item the chain of reconstruction program(s) should be callable and
  steerable in an easy way;
\item there should be no assumptions on the structure or names of files
  or on the number or order of events;
\item each class, data member and method should have correct,
  precise and helpful HTML documentation.
The interface from the steering class \class{AliReconstruction} to the
detector-specific reconstruction code is defined by the base class
\class{AliReconstructor}. For each detector there is a derived reconstructor
class. The user can set options for each reconstructor in the form of a
string parameter, which is accessible inside the reconstructor via the
method \method{GetOption}.

The detector-specific reconstructors are created via
plugins. Therefore they must have a default constructor. If no plugin
handler is defined by the user (in .rootrc), it is assumed that the
name of the reconstructor for detector DET is AliDETReconstructor and
that it is located in the library (or
\textbf{Input Data}

If the input data are provided in the form of ROOT trees, either the
loaders or directly the trees are used to access the digits. In the case
of raw-data input the digits are accessed via a raw reader.

If a galice.root file exists, the run loader will be retrieved from
it. Otherwise the run loader and the headers will be created from the
raw data. The reconstruction cannot work if there is neither a galice.root
file nor raw-data input.
\textbf{Output Data}

The clusters (rec. points) are considered intermediate output and
are stored in ROOT trees handled by the loaders. The final output of
the reconstruction is a tree with objects of type \class{AliESD} stored in the
file AliESDs.root. This Event Summary Data (ESD) contains lists of
reconstructed tracks/particles and global event properties. A detailed
description of the ESD can be found in section~\ref{ESD}.
\textbf{Local Reconstruction (Clusterization)}

The first step of the reconstruction is the so-called ``local
reconstruction''. It is executed for each detector separately and
without exchanging information with other detectors. Usually the
clusterization is done in this step.

The local reconstruction is invoked via the method \method{Reconstruct} of the
reconstructor object. Each detector reconstructor runs the local
reconstruction for all events. The local reconstruction method is
only called if the method \method{HasLocalReconstruction} of the reconstructor
returns kTRUE.

Instead of running the local reconstruction directly on raw data, it
is possible to first convert the raw-data digits into a digits tree
and then to call the \method{Reconstruct} method with the tree as input
parameter. This conversion is done by the method \method{ConvertDigits}. The
reconstructor has to announce that it can convert the raw-data digits
by returning kTRUE in the method \method{HasDigitConversion}.
The current reconstruction of the primary-vertex
position in ALICE is done using the information provided by the
silicon pixel detectors, which constitute the two innermost layers of the
ITS.

The algorithm starts by looking at the
distribution of the $z$ coordinates of the reconstructed space points
in the first pixel layers.
For a vertex with $z$ coordinate $z_{\rm true} = 0$ the distribution is
symmetric and
its centroid ($z_{\rm cen}$) is very close to the nominal
vertex position. When the primary vertex is moved along the $z$ axis, an
increasing fraction
of hits is lost and the centroid of the distribution no longer gives
the primary-vertex
position. However, for primary-vertex locations not too far from
$z_{\rm true} = 0$
(up to about 12~cm), the centroid of the distribution is still correlated with
the true vertex position.
The saturation effect at large $z_{\rm true}$ values of the vertex position
($z_{\rm true} = $12--15~cm)
is, however, not critical, since this procedure is only meant to find a rough
vertex position, in order to introduce some cut along $z$.
To find the final vertex position,
the correlation between the points $z_1$, $z_2$ in the two layers
is considered. More details and performance studies are available in

The primary vertex is reconstructed by a vertexer object derived from
\class{AliVertexer}. After the local reconstruction has been done for all
detectors, the vertexer method \method{FindVertexForCurrentEvent} is called
for each event. It returns a pointer to a vertex object of type
\class{AliESDVertex}.

The vertexer object is created by the method \method{CreateVertexer} of the
reconstructor. So far only the ITS is used to determine the primary
vertex (\class{AliITSVertexerZ} class).
The precision of the primary-vertex reconstruction in the bending plane,
required for the reconstruction of D and B mesons in pp events,
can only be achieved after the tracking is done. The method is
implemented in \class{AliITSVertexerTracks}. It is called as a second
estimation of the primary vertex. The details of the algorithm can be
found in Appendix~\ref{VertexerTracks}.
\textbf{Combined Track Reconstruction}

The combined track reconstruction tries to accumulate the information from
different detectors in order to optimize the track-reconstruction performance.
The result is stored in the combined track objects.
The \class{AliESDTrack} class also
provides the possibility to exchange information between detectors
without introducing dependencies between the reconstruction modules.
This is achieved by using just integer indices pointing to the
specific track objects, which on the other hand makes it possible to
retrieve the full information if needed.
The list of combined tracks can be kept in memory and passed from one
reconstruction module to another.
The storage of the combined tracks should be done in the standard way.
The classes responsible for the reconstruction of tracks are derived
from \class{AliTracker}. They are created by the method
\method{CreateTracker} of the
reconstructors. The reconstructed position of the primary vertex is
made available to them via the method \method{SetVertex}. Before the track
reconstruction in a detector starts, the clusters are loaded from the
clusters tree by the method \method{LoadClusters}. After the track
reconstruction the clusters are unloaded by the method \method{UnloadClusters}.
The track reconstruction (in the barrel part) is done in three passes. The first
pass consists of track finding and fitting in the inward direction in the
TPC and then in the ITS. The virtual method \method{Clusters2Tracks} (of
class \class{AliTracker}) is the
interface to this pass. The method for the next pass is
\method{PropagateBack}. It does the track reconstruction in the outward
direction and is
invoked for all detectors starting with the ITS. The last pass is the
track refit in the inward direction in order to get the track parameters
at the vertex. The corresponding method \method{RefitInward} is called for TRD,
TPC and ITS. All three track-reconstruction methods have an AliESD object as
argument, which is used to exchange track information between detectors
without introducing dependencies between the code of the detector
Depending on the way the information is used, the tracking methods can be
divided into two large groups: global methods and local methods. Each
group has advantages and disadvantages.

With the global methods, all the track measurements are treated
simultaneously and the decision to include or exclude a measurement is
taken when all the information about the track is known.
Typical algorithms belonging to this class are combinatorial methods,
the Hough transform, templates and conformal mappings. The advantages are
stability with respect to noise and mismeasurements, and the possibility
to operate directly on the raw data. On the other hand, these methods
require a precise global track model. Such a track model can sometimes be
unknown or may not even exist because of stochastic processes (energy
loss, multiple scattering), non-uniformity of the magnetic field, etc.
In ALICE, global tracking methods are used extensively in the
High-Level Trigger (HLT) software. There we
are mostly interested in the reconstruction of high-momentum tracks
only; the required precision is not crucial, but the speed of the
calculations is of great importance.
Local methods do not need the knowledge of the global track model.
The track parameters are always estimated `locally' at a given point
in space. The decision to accept or to reject a measurement is made using
either the local information or the information coming from the previous
`history' of this track. With these methods, all the local track
peculiarities (stochastic physics processes, magnetic fields, detector
geometry) can be naturally accounted for. Unfortunately, the local methods
rely on sophisticated space-point reconstruction algorithms (including
unfolding of overlapped clusters). They are sensitive to noise, to wrong or
displaced measurements, and to the precision of the space-point error
parameterization. The most advanced kind of local track-finding method is
the Kalman filter, which was introduced by P. Billoir in
1983~\cite{MC:billoir}.

When applied to the track reconstruction problem, the Kalman-filter
approach shows many attractive properties:
\begin{itemize}
\item It is a method for simultaneous track recognition and
  fitting.
\item There is a possibility to reject incorrect space points `on
  the fly', during a single tracking pass. These incorrect points can
  appear as a consequence of the imperfection of the cluster finder, or
  they may be due to noise, or they may be points from other tracks
  accidentally captured in the list of points to be associated with
  the track under consideration. In the other tracking methods one
  usually needs an additional fitting pass to get rid of incorrectly
  assigned points.
\item In the case of substantial multiple scattering, track
  measurements are correlated and therefore large matrices (of the
  size of the number of measured points) need to be inverted during
  a global fit. In the Kalman-filter procedure we only have to
  manipulate up to $5 \times 5$ matrices (although as many times as
  we have measured space points), which is much faster.
\item One can handle multiple scattering and
  energy losses in a simpler way than in the case of global
  methods. At each step the material budget can be calculated and the
  mean correction applied accordingly.
\item It is a natural way to find the extrapolation
  of a track from one detector to another (for example from the TPC
  to the ITS or to the TRD).
\end{itemize}

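To make the small-matrix bookkeeping of the points above concrete, here is a
deliberately simplified Kalman step for a straight-line track model with a
two-component state (the actual \aliroot trackers propagate five parameters
and a $5\times5$ covariance matrix in a magnetic field); all names and
numbers are illustrative only:

```cpp
#include <cmath>

// Toy Kalman step for a 2-parameter straight-line track model:
// state = {position y, slope t} at the current x. This illustrates
// the update mechanics only, not the AliRoot implementation.
struct KalmanState {
    double y, t;           // track parameters
    double Cyy, Cyt, Ctt;  // symmetric 2x2 covariance matrix
};

// Prediction: propagate over a step dx, then inflate the slope
// variance by ms2 to model process noise (multiple scattering).
void Predict(KalmanState& s, double dx, double ms2) {
    s.y += s.t * dx;
    // C' = F C F^T with F = [[1, dx], [0, 1]]
    double Cyy = s.Cyy + 2.0 * dx * s.Cyt + dx * dx * s.Ctt;
    double Cyt = s.Cyt + dx * s.Ctt;
    s.Cyy = Cyy;
    s.Cyt = Cyt;
    s.Ctt += ms2;
}

// Update with a measured position ym of variance sigma2. Returns the
// chi2 increment, which a tracker can use to accept or reject the
// cluster 'on the fly'.
double Update(KalmanState& s, double ym, double sigma2) {
    double r = ym - s.y;        // residual
    double R = s.Cyy + sigma2;  // residual variance
    double ky = s.Cyy / R;      // Kalman gain
    double kt = s.Cyt / R;
    s.y += ky * r;
    s.t += kt * r;
    double Cyy = s.Cyy, Cyt = s.Cyt;
    s.Cyy -= ky * Cyy;
    s.Cyt -= ky * Cyt;
    s.Ctt -= kt * Cyt;
    return r * r / R;
}
```

Filtering a handful of measurements of the line $y = 0.5x$ drives the slope
estimate towards $0.5$ while only $2\times2$ (in general $5\times5$) objects
are ever manipulated, regardless of the number of space points.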
In ALICE we require good track-finding efficiency and reconstruction
precision for tracks down to \mbox{$\pT = 100$~\mmom}. Some of the ALICE
tracking detectors (ITS, TRD) have a significant material budget.
Under such conditions one cannot neglect the energy losses or the multiple
scattering in the reconstruction. There are also rather
big dead zones between the tracking detectors, which complicates finding
the continuation of the same track. For all these reasons,
it is the Kalman-filtering approach that has been our choice for the
offline reconstruction since 1994.

% \subsubsection{General tracking strategy}

The reconstruction software for the ALICE central tracking detectors (the
ITS, TPC and TRD) shares a common convention on the coordinate
system used. All the clusters and tracks are always expressed in some local
coordinate system related to a given sub-detector (TPC sector, ITS module
etc). This local coordinate system is defined as follows:
\begin{itemize}
\item it is a right-handed Cartesian coordinate system;
\item its origin and the $z$ axis coincide with those of the global
  ALICE coordinate system;
\item the $x$ axis is perpendicular to the sub-detector's `sensitive plane'
  (TPC pad row, ITS ladder etc).
\end{itemize}
Such a choice reflects the symmetry of the ALICE set-up
and therefore simplifies the reconstruction equations.
It also enables the fastest possible transformations from
a local coordinate system to the global one and back again,
since these transformations become simple single rotations around the
$z$ axis.

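Since the local frames share the origin and the $z$ axis of the global
frame, each transformation is a single rotation by the sub-detector's
azimuthal angle $\alpha$. A minimal sketch (the function names and the
example angle are ours, not the \aliroot API):

```cpp
#include <cmath>

// Illustrative transformation between the global ALICE frame and a
// detector-local frame rotated by 'alpha' around the shared z axis
// (alpha would be, e.g., a TPC sector angle). Not the AliRoot API.
struct Point { double x, y, z; };

Point GlobalToLocal(const Point& g, double alpha) {
    double c = std::cos(alpha), s = std::sin(alpha);
    return { c * g.x + s * g.y, -s * g.x + c * g.y, g.z };
}

Point LocalToGlobal(const Point& l, double alpha) {
    double c = std::cos(alpha), s = std::sin(alpha);
    return { c * l.x - s * l.y, s * l.x + c * l.y, l.z };
}
```

A point lying on the sector's symmetry axis has local $y = 0$, which is what
makes the track-propagation formulas in the local frame so simple.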
The reconstruction begins with cluster finding in all of the ALICE central
detectors (ITS, TPC, TRD, TOF, HMPID and PHOS). Using the clusters
reconstructed at the two pixel layers of the ITS, the position of the
primary vertex is estimated and the track finding starts. As
described later, the cluster-finding as well as the track-finding procedures
performed in the detectors have some different detector-specific features.
Moreover, within a given detector, on account of the high occupancy and the
large number of overlapped clusters, the cluster finding and the track
finding are not completely independent: the number and positions of the
clusters are completely determined only at the track-finding step.

The general tracking strategy is the following. We start from our
best tracking device, i.e. the TPC, and from the outer radius where the
track density is minimal. First, the track candidates (`seeds') are
found. Because of the small number of clusters assigned to a seed, the
precision of its parameters is not sufficient to safely extrapolate it
outwards to the other detectors. Instead, the tracking stays within the TPC
and proceeds towards the smaller TPC radii. Whenever possible, new clusters
are associated with a track candidate at each step of the Kalman filter, if
they are within a given distance from the track prolongation, and the track
parameters are more and more refined. When all of the seeds are extrapolated
to the inner limit of the TPC, the tracking proceeds into the ITS. The ITS
tracker tries to prolong the TPC tracks as close as possible to the primary
vertex. On the way to the primary vertex, the tracks are assigned
additional, precisely reconstructed ITS clusters, which also improves
the estimation of the track parameters.

After all the track candidates from the TPC are assigned their clusters
in the ITS, a special ITS stand-alone tracking procedure is applied to
the rest of the ITS clusters. This procedure tries to recover the
tracks that were not found in the TPC because of the \pT cut-off, dead zones
between the TPC sectors, or decays.

At this point the tracking is restarted from the vertex back to the
outer layer of the ITS and then continued towards the outer wall of the
TPC. For tracks that were labeled by the ITS tracker as potentially
primary, several particle-mass-dependent, time-of-flight hypotheses
are calculated. These hypotheses are then used for the particle
identification (PID) with the TOF detector. Once the outer
radius of the TPC is reached, the precision of the estimated track
parameters is sufficient to extrapolate the tracks to the TRD, TOF, HMPID
and PHOS detectors. Tracking in the TRD is done in a similar way to that
in the TPC. Tracks are followed to the outer wall of the TRD and the
assigned clusters improve the momentum resolution further.
Next, the tracks are extrapolated to the TOF, HMPID and PHOS, where they
acquire the PID information.
Finally, all the tracks are refitted with the Kalman filter backwards to
the primary vertex (or to the innermost possible radius, in the case of
the secondary tracks). This gives the most precise information about
the track parameters at the point where the track appeared.

The tracks that passed the final refit towards the primary vertex are used
for the secondary vertex (V$^0$, cascade, kink) reconstruction. There is also
an option to reconstruct the secondary vertexes `on the fly' during the
tracking itself. The potential advantage of such a possibility is that
the tracks coming from a secondary vertex candidate are not extrapolated
beyond the vertex, thus minimizing the risk of picking up a wrong track
prolongation. This option is currently under investigation.

The reconstructed tracks (together with the PID information), kink, V$^0$
and cascade particle decays are then stored in the Event Summary Data (ESD).

More details about the reconstruction algorithms can be found in
Chapter~5 of the ALICE Physics Performance Report~\cite{PPRVII}.

\textbf{Filling of ESD}

After the tracks have been reconstructed and stored in the \class{AliESD}
object, further information is added to the ESD. For each detector the
method \method{FillESD} of the reconstructor is called. Inside this method,
for example, V0s are reconstructed or particles are identified (PID). For
the PID a Bayesian approach is used (see Appendix~\ref{BayesianPID}). The
constants and some functions that are used for the PID are defined in the
class \class{AliPID}.

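The Bayesian combination itself is a one-liner: the detector-response
probabilities are multiplied by the a priori relative concentrations of the
particle species and renormalized. A sketch with our own function names, the
five-component vector mimicking the usual $(e,\mu,\pi,K,p)$ ordering:

```cpp
#include <array>
#include <cmath>

// Sketch of the Bayesian PID combination described in the text:
// posterior weight w(i) = r(i) * C(i) / sum_j r(j) * C(j), where r is
// the detector response and C the a priori concentration. The names
// and species ordering are illustrative, not the AliRoot interface.
constexpr int kSpecies = 5;  // e, mu, pi, K, p

std::array<double, kSpecies> BayesWeights(
        const std::array<double, kSpecies>& response,
        const std::array<double, kSpecies>& prior) {
    std::array<double, kSpecies> w{};
    double norm = 0.0;
    for (int i = 0; i < kSpecies; ++i) {
        w[i] = response[i] * prior[i];
        norm += w[i];
    }
    for (int i = 0; i < kSpecies; ++i)
        w[i] = (norm > 0.0) ? w[i] / norm : 1.0 / kSpecies;
    return w;
}
```

Note the limiting behaviour: if the detector cannot separate the species
(equal responses), the weights simply reproduce the normalized priors.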
\textbf{Monitoring of Performance}

For the monitoring of the track-reconstruction performance, objects of the
class \class{AliTrackReference} are used. They store the position and
momentum of the simulated particles at chosen reference locations, and
corresponding objects are created during the reconstruction at the same
locations, so the reconstructed tracks can easily be compared with the
simulated ones.
This allows one to study and monitor the performance of the track
reconstruction in detail. The creation of the objects used for the
comparison does not interfere with the reconstruction algorithm and can be
switched on or off.

Several ``comparison'' macros permit monitoring of the efficiency and the
resolution of the tracking. Here is a typical usage (the simulation
and the reconstruction have been done in advance):

\begin{verbatim}
aliroot
root [0] gSystem->SetIncludePath("-I$ROOTSYS/include \
                                  -I$ALICE_ROOT/include");
root [1] .L $ALICE_ROOT/TPC/AliTPCComparison.C++
root [2] .L $ALICE_ROOT/ITS/AliITSComparisonV2.C++
root [3] .L $ALICE_ROOT/TOF/AliTOFComparison.C++
root [4] AliTPCComparison()
root [5] AliITSComparisonV2()
root [6] AliTOFComparison()
\end{verbatim}

Another macro can be used to provide a preliminary estimate of the
combined acceptance: \texttt{STEER/CheckESD.C}.

The following classes are used in the reconstruction:
\begin{itemize}
\item \class{AliTrackReference}:
  This class is used to store the position and the momentum of a
  simulated particle at given locations of interest (e.g. when the
  particle enters or exits a detector, or decays). It is used
  mainly for debugging and tuning of the tracking.

\item \class{AliExternalTrackParam}:
  This class describes the status of a track at a given point.
  It knows the track parameters and their covariance matrix.
  This parameterization is used to exchange tracks between the detectors.
  A set of functions returning the position and the momentum of tracks
  in the global coordinate system, as well as the track impact parameters,
  is implemented. There is a possibility to propagate the track to a
  given radius (\method{PropagateTo} and \method{Propagate}).

\item \class{AliKalmanTrack} and derived classes:
  These classes are used to find and fit tracks with the Kalman approach.
  The \class{AliKalmanTrack} defines the interfaces and implements some
  common functionality. The derived classes know about the clusters
  assigned to the track. They also update the information in an
  \class{AliESDtrack}.
  The current status of the track during the track reconstruction can be
  represented by an \class{AliExternalTrackParam}.
  The history of the track during the track reconstruction can be stored
  in a list of \class{AliExternalTrackParam} objects.
  The \class{AliKalmanTrack} defines the methods:
  \begin{itemize}
  \item \method{Double\_t GetDCA(...)}: returns the distance
    of closest approach between this track and the track passed as the
    argument.
  \item \method{Double\_t MeanMaterialBudget(...)}: calculates the mean
    material budget and material properties between two points.
  \end{itemize}

\item \class{AliTracker} and subclasses:
  The \class{AliTracker} is the base class for all the trackers in the
  different detectors. It fixes the interface needed to find and
  propagate tracks. The actual implementation is done in the derived
  classes.

\item \class{AliESDtrack}:
  This class combines the information about a track from the different
  detectors. It knows the current status of the track
  (\class{AliExternalTrackParam}) and it has (non-persistent) pointers
  to the individual \class{AliKalmanTrack} objects from each detector
  which contributed to the track.
  It knows about some detector-specific quantities like the number or
  bit pattern of assigned clusters, ${\rm d}E/{\rm d}x$, $\chi^2$, etc.,
  and it can calculate a conditional probability for a given mixture of
  particle species following the Bayesian approach.
  It defines a track label pointing to the corresponding simulated
  particle in the case of \MC.
  The combined track objects are the basis for a physics analysis.
\end{itemize}

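The geometrical content of \method{GetDCA} can be illustrated with tracks
approximated as straight lines $p + s\,d$ (the real method works on the full
track model); the closed-form solution below, and all its names, are ours:

```cpp
#include <cmath>

// Illustrative distance of closest approach (DCA) between two tracks
// modelled as straight lines p + s*d -- a simplified stand-in for the
// GetDCA idea, which in AliRoot operates on curved track models.
struct Vec3 { double x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

double LineLineDCA(Vec3 p1, Vec3 d1, Vec3 p2, Vec3 d2) {
    Vec3 w0 = Sub(p1, p2);
    double a = Dot(d1, d1), b = Dot(d1, d2), c = Dot(d2, d2);
    double d = Dot(d1, w0), e = Dot(d2, w0);
    double den = a * c - b * b;   // ~0 for parallel tracks
    double s = 0.0, t = 0.0;
    if (den > 1e-12) {
        s = (b * e - c * d) / den;
        t = (a * e - b * d) / den;
    } else {
        t = (c > 0.0) ? e / c : 0.0;  // parallel: fix s = 0
    }
    // separation vector at the points of closest approach
    Vec3 diff{ w0.x + s * d1.x - t * d2.x,
               w0.y + s * d1.y - t * d2.y,
               w0.z + s * d1.z - t * d2.z };
    return std::sqrt(Dot(diff, diff));
}
```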
The example below shows reconstruction with a non-uniform magnetic field
(the simulation is also done with a non-uniform magnetic field, by adding
the following line in the Config.C: \texttt{field->SetL3ConstField(1)}). Only
the barrel detectors are reconstructed, a specific TOF reconstruction
has been requested, and the RAW data have been used:

\begin{verbatim}
void rec() {
  AliReconstruction reco;

  reco.SetRunReconstruction("ITS TPC TRD TOF");
  reco.SetNonuniformFieldTracking();
  reco.SetInput("raw.root");

  reco.Run();
}
\end{verbatim}

% -----------------------------------------------------------------------------

\subsection{Event summary data}\label{ESD}

The classes which are needed to process and analyze the ESD are packed
together in a standalone library which can be used
separately from the \aliroot framework. Inside each
ESD object the data are stored in polymorphic containers filled with
reconstructed tracks, neutral particles, etc. The main class is
\class{AliESD}, which contains all the information needed during the
physics analysis:

\begin{itemize}
\item fields to identify the event, such as event number, run number,
  time stamp, type of event, trigger type (mask), trigger cluster (mask),
  version of reconstruction, etc.;
\item reconstructed ZDC energies and number of participants;
\item primary vertex information: vertex $z$ position estimated by the
  START, primary vertex estimated by the SPD, primary vertex estimated
  using ESD tracks;
\item SPD tracklet multiplicity;
\item interaction time estimated by the START, together with additional
  time and amplitude information from the START;
\item array of ESD tracks;
\item arrays of HLT tracks, both from the conformal mapping and from
  the Hough transform reconstruction;
\item array of MUON tracks;
\item array of PMD tracks;
\item array of TRD ESD tracks (triggered);
\item arrays of reconstructed $V^0$ vertexes, cascade decays and
  kinks;
\item array of calorimeter clusters for PHOS/EMCAL;
\item indexes of the information from the PHOS and EMCAL detectors in the
  array above.
\end{itemize}

% -----------------------------------------------------------------------------

The analysis of experimental data is the final stage of event
processing and it is usually repeated many times. Analysis is a very diverse
activity, where the goals of each
particular analysis pass may differ significantly.

The ALICE detector \cite{PPR} is optimized for the
reconstruction and analysis of heavy-ion collisions.
In addition, ALICE has a broad physics programme devoted to
\pp and \pA interactions.

The data analysis is coordinated by the Physics Board via the Physics
Working Groups (PWGs). At present the following PWGs have started
their activity:
\begin{itemize}
\item PWG0 \textbf{first physics};
\item PWG1 \textbf{detector performance};
\item PWG2 \textbf{global event characteristics:} particle multiplicity,
  centrality, energy density, nuclear stopping; \textbf{soft physics:}
  chemical composition (particle and resonance production, particle ratios
  and spectra, strangeness enhancement), reaction dynamics (transverse and
  elliptic flow, HBT correlations, event-by-event dynamical fluctuations);
\item PWG3 \textbf{heavy flavors:} quarkonia, open charm and beauty
  production;
\item PWG4 \textbf{hard probes:} jets, direct photons.
\end{itemize}

Each PWG has a corresponding module in AliRoot (PWG0 -- PWG4). The code
is managed by CVS administrators.

The \pp and \pA programme will provide, on the one hand, reference points
for comparison with heavy ions. On the other hand, ALICE will also
pursue genuine and detailed \pp studies. Some
quantities, in particular the global characteristics of interactions, will
be measured during the first days of running, exploiting the low-momentum
measurement and particle-identification capabilities of ALICE.

The ALICE computing framework is described in detail in the Computing
Technical Design Report~\cite{CompTDR}. This section is based on
Chapter~6 of that document.

\paragraph{The analysis activity.}

We distinguish two main types of analysis: scheduled analysis and
chaotic analysis. They differ in their data access pattern, in the
storage and registration of the results, and in the frequency of
changes in the analysis code (more details are available below).

In the ALICE Computing Model the analysis starts from the Event Summary
Data (ESD). These are produced during the reconstruction step and contain
all the information needed for the analysis. The size of the ESD is
about one order of magnitude smaller than that of the corresponding raw
data. The analysis tasks produce Analysis
Object Data (AOD) specific to a given set of physics objectives.
Further passes for the specific analysis activity can be performed on
the AODs, until the selection parameters or algorithms are changed.

A typical data analysis task usually requires processing of
selected sets of events. The selection is based on the event
topology and characteristics, and is done by querying the tag
database. The tags represent physics quantities which characterize
each run and event, and permit fast selection. They are created
after the reconstruction and also contain the unique
identifier of the ESD file. A typical query, when translated into
natural language, could look like ``Give me
all the events with impact parameter in $<$range$>$
containing jet candidates with energy larger than $<$threshold$>$''.
This results in a list of events and file identifiers to be used in the
subsequent event loop.

The next step of a typical analysis consists of a loop over all the events
in the list and calculation of the physics quantities of
interest. Usually, for each event, there is a set of embedded loops over the
reconstructed entities such as tracks, ${\rm V^0}$ candidates, neutral
clusters, etc., the main goal of which is to select the signal
candidates. Inside each loop a number of criteria (cuts) are applied to
reject the background combinations and to select the signal ones. The
cuts can be based on geometrical quantities, such as the impact parameters
of the tracks with respect to the primary vertex, the distance between the
cluster and the closest track, the distance of closest approach between the
tracks, or the angle between the momentum vector of the particle
combination and the line connecting the production and decay vertexes.
They can also be based on kinematics quantities, such as momentum ratios,
minimal and maximal transverse momentum, and angles in the rest frame of
the particle combination.
Particle-identification criteria are also among the most common
selection criteria.

The optimization of the selection criteria is one of the most
important parts of the analysis. The goal is to maximize the
signal-to-background ratio in the case of search tasks, or another
ratio (typically ${\rm Signal/\sqrt{Signal+Background}}$) in the
case of measurement of a given property. Usually, this optimization is
performed using simulated events, where the information from the
particle generator is available.

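A toy version of such an optimization scans one cut and keeps the value
that maximizes ${\rm S/\sqrt{S+B}}$; the efficiency curves passed in are
invented stand-ins for what would be measured on simulated events:

```cpp
#include <cmath>
#include <vector>

// Scan a single selection cut: for each candidate value count the
// surviving signal S and background B, keep the cut that maximizes
// S/sqrt(S+B). A sketch; all names are ours.
struct ScanResult { double cut, significance; };

ScanResult ScanCut(const std::vector<double>& cuts,
                   double s0, double b0,                 // yields before cuts
                   double (*sigEff)(double),             // signal efficiency
                   double (*bkgEff)(double)) {           // background efficiency
    ScanResult best{cuts.front(), -1.0};
    for (double c : cuts) {
        double S = s0 * sigEff(c);
        double B = b0 * bkgEff(c);
        if (S + B <= 0.0) continue;
        double z = S / std::sqrt(S + B);
        if (z > best.significance) best = {c, z};
    }
    return best;
}
```

With a background efficiency falling much faster than the signal one, the
optimum lies at an intermediate cut value, tighter than "no cut" but well
before the signal itself is cut away.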
After the optimization of the selection criteria, one has to take into
account the combined acceptance of the detector. This is a complex,
analysis-specific quantity which depends on the geometrical acceptance,
the trigger efficiency, the decays of particles, the reconstruction
efficiency, the efficiency of the particle identification and of the
selection cuts. The components of the combined acceptance are usually
parameterized and their product is used to unfold the experimental
distributions or during the simulation of some model parameters.

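In the simplest factorized approximation this amounts to multiplying
\pT-dependent components and dividing the measured yield by the product.
The parameterizations below are invented placeholders that only show the
structure of the correction:

```cpp
#include <cmath>

// Factorized combined acceptance: product of pt-dependent components.
// Every parameterization here is a made-up placeholder, standing in
// for the parameterizations obtained from simulation.
double GeomAcc(double)        { return 0.8; }                      // geometry
double TrigEff(double pt)     { return pt / (pt + 0.5); }          // trigger turn-on
double RecoEff(double pt)     { return 1.0 - std::exp(-5.0 * pt); }// tracking
double PidEff(double)         { return 0.9; }                      // PID + cuts

double CombinedAcceptance(double pt) {
    return GeomAcc(pt) * TrigEff(pt) * RecoEff(pt) * PidEff(pt);
}

// Unfolding a measured yield then amounts to dividing by the product.
double CorrectedYield(double measured, double pt) {
    return measured / CombinedAcceptance(pt);
}
```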
The last part of the analysis usually involves quite complex
mathematical treatments and sophisticated statistical tools. Here one
may include the correction for systematic effects, the estimation of
statistical and systematic errors, etc.

\paragraph{Scheduled analysis.}

The scheduled analysis typically uses all
the available data from a given period, and stores and registers the results
using \grid middleware. The tag database is updated accordingly. The
AOD files, generated during the scheduled
analysis, can be used by several subsequent analyses, or by a class of
related physics tasks.
The procedure of scheduled analysis is centralized and can be
considered as data filtering. The requirements come from the PWGs and
are prioritized by the Physics Board, taking into
account the available computing and storage resources. The analysis
code is tested in advance and released before the beginning of the
data processing.

Each PWG will require some sets of
AODs per event, which are specific to one or
a few analysis tasks. The creation of the AOD sets is managed centrally.
The event list of each AOD set
will be registered and the access to the AOD files will be granted to
all ALICE collaborators. AOD files will be generated
at different computing centers and will be stored on
the corresponding storage
elements. The processing of each file set will thus be done in a
distributed way on the \grid. Some of the AOD sets may be quite small
and would fit on a single storage element or even on one computer; in
this case the corresponding tools for file replication, available
in the ALICE \grid infrastructure, will be used.

\paragraph{Chaotic analysis.}

The chaotic analysis is focused on a single physics task and
typically is based on the filtered data from the scheduled
analysis. Each physicist also
may access directly large parts of the ESD in order to search for rare
events or processes.
Usually the user develops the code using a small subsample
of data, and changes the algorithms and criteria frequently. The
analysis macros and software are tested many times on relatively
small data volumes, both experimental and \MC.
The output is often only a set of histograms.
Such a tuning of the analysis code can be done on a local
data set or on distributed data using \grid tools. The final version
of the analysis
will eventually be submitted to the \grid and will access large
portions or even
the totality of the ESDs. The results may be registered in the \grid file
catalog and used at later stages of the analysis.
This activity may or may not be coordinated inside
the PWGs, via the definition of priorities. The
chaotic analysis is carried out within the computing resources of the
physics groups.

% -----------------------------------------------------------------------------

\subsection{Infrastructure tools for distributed analysis}

The main infrastructure tools for distributed analysis have been
described in Chapter~3 of the Computing TDR~\cite{CompTDR}. The actual
middleware is hidden by an interface to the \grid,
gShell~\cite{CH6Ref:gShell}, which provides a
single working shell.
The gShell package contains all the commands a user may need for file
catalog queries, creation of sub-directories in the user space,
registration and removal of files, job submission and process
monitoring. The actual \grid middleware is completely transparent to
the user.

The gShell overcomes the scalability problem of direct client
connections to databases. All clients connect to the
gLite~\cite{CH6Ref:gLite} API
services. This service is implemented as a pool of preforked server
daemons, which serve single-client requests. The client-server
protocol implements a client state, which is represented by a current
working directory, a client session ID and a time-dependent symmetric
cipher on both ends to guarantee client privacy and security. The
server daemons execute client calls with the identity of the connected
client.

\subsubsection{PROOF -- the Parallel ROOT Facility}

The Parallel ROOT Facility, PROOF~\cite{CH6Ref:PROOF}, has been specially
designed and developed
to allow the analysis and mining of very large data sets, minimizing
response time. It makes use of the inherent parallelism in event data
and implements an architecture that optimizes I/O and CPU utilization
in heterogeneous clusters with distributed storage. The system
provides transparent and interactive access to terabyte-scale data
sets. Being part of the ROOT framework, PROOF inherits the benefits of
an efficient object storage system and a wealth of statistical and
visualization tools.

The most important design features of PROOF are:
\begin{itemize}
\item transparency -- no difference between a local ROOT and
  a remote parallel PROOF session;
\item scalability -- no implicit limitations on the number of computers
  used in parallel;
\item adaptability -- the system is able to adapt to variations in the
  remote environment.
\end{itemize}

PROOF is based on a multi-tier architecture: the ROOT client session,
the PROOF master server, optionally a number of PROOF sub-master
servers, and the PROOF worker servers. The user connects from the ROOT
session to a master server on a remote cluster and the master server
creates sub-masters and worker servers on all the nodes in the
cluster. All workers process queries in parallel and the results are
presented to the user as coming from a single server.

PROOF can be run either in a purely interactive way, with the user
remaining connected to the master and worker servers and the analysis
results being returned to the user's ROOT session for further
analysis, or in an `interactive batch' way, where the user disconnects
from the master and workers (see Fig.~\vref{CH3Fig:alienfig7}). By
reconnecting later to the master server the user can retrieve the
analysis results for that particular
query. This last mode is useful for relatively long-running queries
(several hours) or for submitting many queries at the same time. Both
modes will be important for the analysis of ALICE data.

\begin{figure}[htb]
  \centering
  \includegraphics[width=11.5cm]{picts/alienfig7}
  \caption{Setup and interaction with the \grid middleware of a user
    PROOF session distributed over many computing centers.}
  \label{CH3Fig:alienfig7}
\end{figure}

% -----------------------------------------------------------------------------

\subsection{Analysis tools}

This section is devoted to the existing analysis tools in \ROOT and
\aliroot. As discussed in the introduction, some very broad
analysis tasks include the search for some rare events (in this case the
physicist tries to maximize the signal-over-background ratio), or
measurements where it is important to maximize the signal
significance. The tools that provide possibilities to apply certain
selection criteria and to find the interesting combinations within
a given event are described below. Some of them are very general and are
used in many different places, for example the statistical
tools. Others are specific to a given analysis.

\subsubsection{Statistical tools}

Several commonly used statistical tools are available in
\ROOT~\cite{ROOT}. \ROOT provides
classes for efficient data storage and access, such as trees
and ntuples. The
ESD information is organized in a tree, where each event is a separate
entry. This allows a chain of the ESD files to be made and the
elaborate selector mechanisms to be used in order to exploit the PROOF
services. The tree classes
permit easy navigation, selection, browsing, and visualization of the
data in the branches.

\ROOT also provides histogramming and fitting classes, which are used
for the representation of all the one- and multi-dimensional
distributions, and for extraction of their fitted parameters. \ROOT provides
an interface to powerful and robust minimization packages, which can be
used directly during some special parts of the analysis. A special
fitting class allows one to decompose an experimental histogram as a
superposition of source histograms.

\ROOT also has a set of sophisticated statistical analysis tools, such as
principal component analysis, robust estimators, and neural networks.
The calculation of confidence levels is provided as well.

Additional statistical functions are included in \texttt{TMath}.

\subsubsection{Calculations of kinematics variables}

The main \ROOT physics classes include 3-vectors and Lorentz
vectors, and operations
such as translation, rotation, and boost. The calculations of
kinematics variables
such as transverse and longitudinal momentum, rapidity,
pseudorapidity, effective mass, and many others are provided as well.

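In practice one uses the \ROOT Lorentz-vector classes; hand-rolled versions
of a few of the variables named above make the definitions explicit (the
struct and function names are ours):

```cpp
#include <cmath>

// Explicit definitions of common kinematics variables, normally
// obtained from ROOT's Lorentz-vector classes.
struct P4 { double px, py, pz, e; };

double Pt(const P4& p) {                       // transverse momentum
    return std::sqrt(p.px * p.px + p.py * p.py);
}
double Rapidity(const P4& p) {                 // y = 0.5 ln((E+pz)/(E-pz))
    return 0.5 * std::log((p.e + p.pz) / (p.e - p.pz));
}
double Eta(const P4& p) {                      // pseudorapidity
    double mom = std::sqrt(p.px * p.px + p.py * p.py + p.pz * p.pz);
    return 0.5 * std::log((mom + p.pz) / (mom - p.pz));
}
double InvMass(const P4& a, const P4& b) {     // effective (invariant) mass
    double e = a.e + b.e;
    double px = a.px + b.px, py = a.py + b.py, pz = a.pz + b.pz;
    return std::sqrt(e * e - px * px - py * py - pz * pz);
}
```

For example, two back-to-back massless particles of energy $E$ combine to an
effective mass of $2E$, the standard check for a two-photon decay.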
\subsubsection{Geometrical calculations}

There are several classes which can be used for
measurement of the primary vertex: \texttt{AliITSVertexerZ},
\texttt{AliITSVertexerIons}, \texttt{AliITSVertexerTracks}, etc. A fast
estimation of the {\it z}-position can be
done by \texttt{AliITSVertexerZ}, which works for both lead--lead
and proton--proton collisions. A universal tool is provided by
\texttt{AliITSVertexerTracks}, which calculates the position and
covariance matrix of the primary vertex based on a set of tracks, and
also estimates the $\chi^2$ contribution of each track. An iterative
procedure can be used to remove the secondary tracks and improve the
precision.

Track propagation towards the primary vertex (inward) is provided in
\texttt{AliESDtrack}.

The secondary vertex reconstruction in the case of ${\rm V^0}$s is provided
by \texttt{AliV0vertexer}, and in the case of cascade hyperons by
\texttt{AliCascadeVertexer}. A universal tool is
\texttt{AliITSVertexerTracks}, which can also be used to find secondary
vertexes close to the primary one, for example decays of open charm
like ${\rm D^0 \to K^- \pi^+}$ or ${\rm D^+ \to K^- \pi^+ \pi^+}$. All
the vertex
reconstruction classes also calculate the distance of closest approach
(DCA) between the track and the vertex.

3002The calculation of impact parameters with respect to the primary vertex
3003is done during the reconstruction and the information is available in
3004\texttt{AliESDtrack}. It is then possible to recalculate the
3005impact parameter during the ESD analysis, after an improved determination
3006of the primary vertex position using reconstructed ESD tracks.
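The geometry behind the DCA quoted above can be sketched as follows. This is a
straight-line approximation with illustrative types; the actual \aliroot
classes propagate helical tracks in the magnetic field:

```cpp
#include <cmath>

// Geometry sketch only: near the vertex a track can be approximated by a
// straight line; real reconstruction code uses helical propagation.
struct Vec3 { double x, y, z; };

// DCA of the line p(t) = point + t * dir (dir need not be normalized)
// to the vertex v.
double LineToVertexDCA(const Vec3& point, const Vec3& dir, const Vec3& v) {
    Vec3 d{v.x - point.x, v.y - point.y, v.z - point.z};
    double dir2 = dir.x * dir.x + dir.y * dir.y + dir.z * dir.z;
    double t = (d.x * dir.x + d.y * dir.y + d.z * dir.z) / dir2;  // closest point
    Vec3 r{d.x - t * dir.x, d.y - t * dir.y, d.z - t * dir.z};    // residual vector
    return std::sqrt(r.x * r.x + r.y * r.y + r.z * r.z);
}
```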
\subsubsection{Global event characteristics}

The impact parameter of the interaction and the number of participants
are estimated from the energy measurements in the ZDC. In addition,
the information from the FMD, PMD, and T0 detectors is available. It
gives a valuable estimate of the event multiplicity at high rapidities
and permits global event characterization. Together with the ZDC
information it improves the determination of the impact parameter,
number of participants, and number of binary collisions.

The event plane orientation is calculated by the \texttt{AliFlowAnalysis} class.
\subsubsection{Comparison between reconstructed and simulated parameters}

The comparison between the reconstructed and simulated parameters is
an important part of the analysis. It is the only way to estimate the
precision of the reconstruction. Several example macros exist in
\aliroot and can be used for this purpose: \texttt{AliTPCComparison.C},
\texttt{AliITSComparisonV2.C}, etc. As a first step in each of these
macros the list of so-called `good tracks' is built. The definition of
a good track is explained in detail in the ITS\cite{CH6Ref:ITS_TDR} and
TPC\cite{CH6Ref:TPC_TDR} Technical Design
Reports. The essential point is that the track
goes through the detector and can be reconstructed. Using the `good
tracks' one then estimates the efficiency of the reconstruction and
the resolution.
Another example is specific to the MUON arm: the \texttt{MUONRecoCheck.C}
macro compares the reconstructed muon tracks with the simulated ones.

There is also the possibility to calculate the resolutions directly, without
additional requirements on the initial track. One can use the
so-called track label and retrieve the corresponding simulated
particle directly from the particle stack (\texttt{AliStack}).
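The label-based matching can be sketched as follows. The structures are
illustrative, not the \aliroot classes; \texttt{label} plays the role of the
track label, i.e.\ the index of the originating simulated particle in the
stack:

```cpp
#include <cmath>
#include <vector>

// Sketch of label-based matching (illustrative names, not the AliRoot API):
// each reconstructed track carries the index ("label") of the simulated
// particle it originates from, so the residual is a direct lookup.
struct SimParticle { double pt; };
struct RecTrack    { int label; double pt; };

// Relative transverse-momentum residual (rec - sim) / sim for one track.
double PtResidual(const RecTrack& rec, const std::vector<SimParticle>& stack) {
    const SimParticle& sim = stack[rec.label];  // direct lookup via the label
    return (rec.pt - sim.pt) / sim.pt;
}
```

Filling a histogram of such residuals over many tracks yields the momentum
resolution.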
\subsubsection{Event mixing}

One particular analysis approach in heavy-ion physics is the
estimation of the combinatorial background using event mixing. Part of the
information (for example the positive tracks) is taken from one
event, another part (for example the negative tracks) is taken from
a different, but
`similar' event. The event `similarity' is very important, because
only in this case the combinations produced from different events
represent the combinatorial background. Typically `similar' in
the example above means with the same multiplicity of negative
tracks. One may require in addition similar impact parameters of the
interactions, rotation of the tracks of the second event to adjust the
event plane, etc. The possibility for event mixing is provided in
\aliroot by the fact that the ESD is stored in trees and one can chain
and access many ESD objects simultaneously. The first pass would then
be to order the events according to the desired criterion of
`similarity' and to use the obtained index for accessing the `similar'
events in the embedded analysis loops. An example of event mixing is
shown in Fig.~\ref{CH6Fig:phipp}. The background distribution has been
obtained using `mixed events'. The signal distribution has been taken
directly from the \MC simulation. The `experimental distribution' has
been produced by the analysis macro and decomposed as a
superposition of the signal and background histograms.
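The mixing scheme described above can be sketched as follows. This is a
minimal, self-contained illustration (not the \aliroot API): events are first
ordered by the similarity criterion, here the multiplicity of negative
tracks, and each event is then combined with its neighbour in that ordering:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Minimal sketch of event mixing (illustrative, not the AliRoot API).
struct Event {
    std::vector<double> posPt;  // positive tracks (stand-in for full tracks)
    std::vector<double> negPt;  // negative tracks
};

// Returns the number of background pairs built from `similar' event pairs.
std::size_t CountMixedPairs(std::vector<Event> events) {
    // First pass: order events by the similarity criterion.
    std::sort(events.begin(), events.end(),
              [](const Event& a, const Event& b) {
                  return a.negPt.size() < b.negPt.size();
              });
    std::size_t pairs = 0;
    // Embedded loops: positives from one event, negatives from the next.
    for (std::size_t i = 0; i + 1 < events.size(); ++i)
        pairs += events[i].posPt.size() * events[i + 1].negPt.size();
    return pairs;
}
```

In a real analysis the inner loops would fill the background invariant-mass
histogram instead of merely counting the combinations.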
\begin{figure}[th]
 \centering
 \includegraphics*[width=120mm]{picts/phipp}
 \caption{Mass spectrum of the ${\rm \phi}$ meson candidates produced
 inclusively in the proton--proton interactions.}
 \label{CH6Fig:phipp}
\end{figure}
\subsubsection{Analysis of the High-Level Trigger (HLT) data}

This is a specific analysis which is needed in order to adjust the cuts
in the HLT code, or to estimate the HLT
efficiency and resolution. \aliroot provides a transparent way of doing
such an analysis, since the HLT information is stored in the form of ESD
objects in a parallel tree. This also helps in the monitoring and
visualization of the results of the HLT algorithms.
\subsubsection{EVE -- Event Visualization Environment}

EVE is composed of:
\begin{itemize}
\item a small application kernel;
\item graphics classes with editors and OpenGL renderers;
\item CINT scripts that extract data, fill graphics classes and register
  them to the application.
\end{itemize}

The framework is still evolving; some things might not work as expected.
\begin{enumerate}
\item Initialize the ALICE environment.
\item Spawn the \texttt{alieve} executable and invoke the
  \texttt{alieve\_init.C} macro, for example:

  To load the first event from the current directory:
\begin{verbatim}
  # alieve alieve_init.C
\end{verbatim}
  To load the 5th event from the directory /data/my-pp-run:
\begin{verbatim}
  # alieve 'alieve_init.C("/data/my-pp-run", 5)'
\end{verbatim}
  Or, from the interactive prompt:
\begin{verbatim}
  # alieve
  root[0] .L alieve_init.C
  root[1] alieve_init("/somedir")
\end{verbatim}
\item Use the GUI or the CINT command line to invoke further visualization
  macros.
\item To navigate the events use the macros \texttt{event\_next.C} and
  \texttt{event\_prev.C}. These are equivalent to the command-line invocations:
\begin{verbatim}
  root[x] Alieve::gEvent->NextEvent()
\end{verbatim}
  or
\begin{verbatim}
  root[x] Alieve::gEvent->PrevEvent()
\end{verbatim}
  The general form to go to an event via its number is:
\begin{verbatim}
  root[x] Alieve::gEvent->GotoEvent(<event-number>)
\end{verbatim}
\end{enumerate}

See the files in \texttt{EVE/alice-macros/}. For specific uses these should be
edited to suit your needs.
\underline{Directory structure}

EVE is split into two modules: REVE (the ROOT part, not dependent on
AliROOT) and ALIEVE (the ALICE-specific part). For the time being both
modules are kept in the AliROOT CVS.

\texttt{Alieve/} and \texttt{Reve/} -- sources\\
\texttt{macros/} -- macros for bootstrapping and internal steering\\
\texttt{alice-macros/} -- macros for ALICE visualization\\
\texttt{alice-data/} -- data files used by the ALICE macros\\
\texttt{test-macros/} -- macros for tests of specific features; usually one
  needs to copy and edit them\\
\texttt{bin/}, \texttt{Makefile} and make files are used for the stand-alone
build of the modules.
\underline{Troubleshooting}

\begin{itemize}
\item Problems with macro execution

A failed macro execution can leave CINT in a poorly defined state that
prevents further execution of macros. For example:

\begin{verbatim}
 Exception Reve::Exc_t: Event::Open failed opening ALICE ESDfriend from
 '/alice-data/coctail_10k/AliESDfriends.root'.

 root [1] Error: Function MUON_geom() is not defined in current scope :0:
 *** Interpreter error recovered ***
 Error: G__unloadfile() File "/tmp/MUON_geom.C" not loaded :0:
\end{verbatim}

\texttt{gROOT->Reset()} helps in most of the cases.
\end{itemize}

% ------------------------------------------------------------------------------
\subsection{Existing analysis examples in \aliroot}

There are several dedicated analysis tools available in \aliroot. Their results
were used in the Physics Performance Report and described in
ALICE internal notes. There are two main classes of analysis: the
first one based directly on the ESD, and the second one first
extracting AOD and then analyzing it.
\begin{itemize}
\item\textbf{ESD analysis}

 \begin{itemize}
 \item[ ] \textbf{${\rm V^0}$ and cascade reconstruction/analysis}

 The ${\rm V^0}$ candidates
 are reconstructed during the combined barrel tracking and stored in
 the ESD object. The following criteria are used for the selection:
 minimal-allowed impact parameter (in the transverse plane) for each
 track; maximal-allowed DCA between the two tracks; maximal-allowed
 cosine of the
 ${\rm V^0}$ pointing angle
 (the angle between the momentum vector of the particle combination
 and the line connecting the production and decay vertexes); minimal
 and maximal radius of the fiducial volume; maximal-allowed ${\rm
 \chi^2}$. The
 last criterion requires the covariance matrix of track parameters,
 which is available only in \texttt{AliESDtrack}. The reconstruction
 is performed by \texttt{AliV0vertexer}. This class can also be used
 in the analysis. An example of reconstructed kaons taken directly
 from the ESDs is shown in Fig.~\ref{CH6Fig:kaon}.
 \begin{figure}[th]
 \centering
 \includegraphics*[width=120mm]{picts/kaon}
 \caption{Mass spectrum of the ${\rm K_S^0}$ meson candidates produced
 inclusively in the \mbox{Pb--Pb} collisions.}
 \label{CH6Fig:kaon}
 \end{figure}
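 The pointing-angle criterion quoted above can be made concrete with a short
 geometry sketch (illustrative types only, not the \texttt{AliV0vertexer}
 API): the cosine of the ${\rm V^0}$ pointing angle is the angle between the
 candidate momentum and the line from the production vertex to the decay
 vertex.

```cpp
#include <cmath>

// Illustrative geometry only (not the AliV0vertexer API).
struct V3 { double x, y, z; };

double CosPointingAngle(const V3& primary, const V3& decay, const V3& momentum) {
    // Flight line: from the production (primary) vertex to the decay vertex.
    V3 flight{decay.x - primary.x, decay.y - primary.y, decay.z - primary.z};
    double dot = flight.x * momentum.x + flight.y * momentum.y
               + flight.z * momentum.z;
    double nf = std::sqrt(flight.x * flight.x + flight.y * flight.y
                          + flight.z * flight.z);
    double nm = std::sqrt(momentum.x * momentum.x + momentum.y * momentum.y
                          + momentum.z * momentum.z);
    return dot / (nf * nm);  // 1 means the candidate points back perfectly
}
```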
 The cascade hyperons are reconstructed using the ${\rm V^0}$ candidate and a
 `bachelor' track selected according to the cuts above. In addition,
 one requires that the reconstructed ${\rm V^0}$ effective mass belongs to
 a certain interval centered on the true value. The reconstruction
 is performed by \texttt{AliCascadeVertexer}, and this class can be
 used in the analysis.
 \item[ ] \textbf{Open charm}

 This is the second elaborated example of ESD
 analysis. There are two classes, \texttt{AliD0toKpi} and
 \texttt{AliD0toKpiAnalysis}, which contain the corresponding analysis
 code. The decay under investigation is ${\rm D^0 \to K^- \pi^+}$ and its
 charge conjugate. Each ${\rm D^0}$ candidate is formed by a positive and
 a negative track, selected to fulfill the following requirements:
 minimal-allowed track transverse momentum and minimal-allowed track
 impact parameter in the transverse plane with respect to the primary
 vertex. The selection criteria for each combination include:
 maximal-allowed distance of closest approach between the two tracks;
 decay angle of the kaon in the ${\rm D^0}$ rest frame within a given region;
 product of the impact parameters of the two tracks larger than a given value;
 pointing angle between the ${\rm D^0}$ momentum and flight-line smaller than
 a given value. The particle
 identification probabilities are used to reject the wrong
 combinations, namely ${\rm (K,K)}$ and ${\rm (\pi,\pi)}$, and to enhance the
 signal-to-background ratio at low momentum by requiring the kaon
 identification. All proton-tagged tracks are excluded before the
 analysis loop on track pairs. More details can be found in
 Ref.~\cite{CH6Ref:Dainese}.
 \item[ ] \textbf{Quarkonia analysis}

 Muon tracks stored in the ESD can be analyzed, for example, by the macro
 \texttt{MUONmassPlot\_ESD.C}.
 This macro performs an invariant-mass analysis of muon unlike-sign pairs
 and calculates the combinatorial background.
 Quarkonia \pt and rapidity distributions are built for \Jpsi and \Ups.
 This macro also performs a fast single-muon analysis: \pt,
 rapidity, and
 ${\rm \theta}$ vs ${\rm \varphi}$ acceptance distributions for positive
 and negative muon
 tracks with a maximal-allowed ${\rm \chi^2}$.

 \end{itemize}

 % \newpage
\item\textbf{AOD analysis}

 Often only a small subset of the information contained in the ESD
 is needed to perform an analysis. This information
 can be extracted and stored in the AOD format in order to reduce
 the computing resources needed for the analysis.

 The AOD analysis framework implements a set of tools like data readers,
 converters, cuts, and other utility classes.
 The design is based on two main requirements: flexibility and a common
 AOD particle interface. This guarantees that several analyses can be
 done in sequence within the same computing session.

 In order to fulfill the first requirement, the analysis is driven by the
 `analysis manager' class, and particular analyses are added to it.
 It performs the loop over events, which are delivered by a
 user-specified reader. This design allows the analyses to be ordered
 appropriately if some of them depend on the results of the others.
 The cuts are designed to provide high flexibility
 and performance. A two-level architecture has been adopted
 for all the cuts (particle, pair and event). A class representing a cut
 has a list of `base cuts'. Each base cut implements a cut on a
 single property or performs a logical operation (and, or) on the results of
 other base cuts.
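 The two-level architecture can be sketched as follows. The names are
 illustrative, not the actual framework classes: a base cut checks a single
 property, and the top-level cut holds a list of base cuts.

```cpp
#include <memory>
#include <vector>

// Sketch of the two-level cut architecture (illustrative names only).
struct Particle { double pt, eta; };

struct BaseCut {
    virtual ~BaseCut() = default;
    virtual bool Rejected(const Particle& p) const = 0;  // true = cut away
};

// A base cut on a single property: minimal-allowed transverse momentum.
struct PtMinCut : BaseCut {
    double min;
    explicit PtMinCut(double m) : min(m) {}
    bool Rejected(const Particle& p) const override { return p.pt < min; }
};

// Top-level cut: a particle passes only if no base cut rejects it.
struct ParticleCut {
    std::vector<std::unique_ptr<BaseCut>> baseCuts;
    bool Rejected(const Particle& p) const {
        for (const auto& c : baseCuts)
            if (c->Rejected(p)) return true;
        return false;
    }
};
```

 A logical-operation base cut would simply hold its own sub-cuts and combine
 their \texttt{Rejected} results with and/or.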
 A class representing a pair of particles buffers all the results,
 so they can be re-used if required.
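 The buffering idea can be illustrated with a minimal sketch (not the actual
 pair class): an expensive quantity is computed once, a Boolean flag marks it
 as cached, and internal code always goes through the accessor so the cache
 stays consistent.

```cpp
// Sketch of result buffering in a pair-like class (illustrative only).
class Pair {
public:
    Pair(double px1, double px2) : fPx1(px1), fPx2(px2) {}
    double SumPx() {
        if (!fSumCalculated) {        // compute only on first access
            fSum = fPx1 + fPx2;
            fSumCalculated = true;
            ++fComputations;
        }
        return fSum;                  // cached value on later accesses
    }
    int Computations() const { return fComputations; }
private:
    double fPx1, fPx2;
    double fSum = 0;
    bool fSumCalculated = false;      // the Boolean flag described in the text
    int fComputations = 0;            // only to demonstrate the caching
};
```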
 \vspace{-0.2cm}
 \begin{itemize}
 \item[ ] \textbf{Particle momentum correlations (HBT) -- HBTAN module}

 Particle momentum correlation analysis is based on the event-mixing
 technique. It allows one to extract the signal by dividing the appropriate
 particle spectra coming from the original events by those from the
 mixed events.

 Two analysis objects are currently implemented to perform the mixing:
 the standard one and one implementing the Stavinsky
 algorithm\cite{CH6Ref:Stavinsky}. Others can easily be added if needed.

 An extensive hierarchy of function base classes has been implemented,
 facilitating the creation of new functions.
 A wide set of correlation, distribution and monitoring
 functions is already available in the module. See Ref.~\cite{CH6Ref:HBTAN}
 for the details.

 The package contains two implementations of weighting algorithms, used
 for correlation simulations (the first developed by Lednicky
 \cite{CH6Ref:Weights} and the second due to CRAB \cite{CH6Ref:CRAB}), both
 based on a uniform interface.
 \item[ ] \textbf{Jet analysis}

 The jet analysis\cite{CH6Ref:Loizides} is available in the module JETAN.
 It has a set of
 readers of the form \texttt{AliJetParticlesReader<XXX>}, where \texttt{XXX}
 = \texttt{ESD},
 \texttt{HLT}, \texttt{KineGoodTPC}, \texttt{Kine}, derived from the base class
 \texttt{AliJetParticlesReader}. These
 provide a uniform interface to
 the information from the
 kinematics tree, from the HLT, and from the ESD. The first step in the
 analysis is the creation of an AOD object: a tree containing objects of
 type \texttt{AliJetEventParticles}. The particles are selected using a
 cut on the minimal-allowed transverse momentum. The second analysis
 step consists of jet finding. Several algorithms are available in the
 classes of the type \texttt{Ali<XXX>JetFinder}.
 An example of AOD creation is provided in
 the \texttt{createEvents.C} macro. The usage of jet finders is illustrated in
 the \texttt{findJets.C} macro.
 \item[ ] \textbf{${\rm V^0}$ AODs}

 The AODs for ${\rm V^0}$ analysis contain several additional parameters,
 calculated and stored for fast access. The methods of the class
 \texttt{AliAODv0} provide access to all the geometrical and kinematics
 parameters of a ${\rm V^0}$ candidate, and to the ESD information used
 for the calculations.
 \vspace{-0.1cm}
 \item[ ] \textbf{MUON}

 There is also a prototype MUON analysis provided in
 \texttt{AliMuonAnalysis}. It simply fills several histograms, namely
 the transverse momentum and rapidity for positive and negative muons,
 the invariant mass of the muon pair, etc.
 \end{itemize}
\end{itemize}
\section{Analysis Foundation Library}

The result of the reconstruction chain is the Event Summary Data (ESD)
object. It contains all the information that may
be useful in {\it any} analysis. In most cases only a small subset
of this information is needed for a given analysis.
Hence, it is essential to provide a framework for analyses in which the
user can extract only the information required and store it in
the Analysis Object Data (AOD) format, to be used in all
further analyses. Proper data preselection speeds up
the computation significantly. Moreover, the interface of the ESD classes is
designed to fulfill the requirements of the reconstruction
code and, in contrast to the AOD interface, it is inconvenient for most
analysis algorithms. Additionally, the AOD can be customized
to the needs of a particular analysis, if required.
We have developed the analysis foundation library that
provides a skeleton framework for analyses, defines the AOD data format
and implements a wide set of basic utility classes which facilitate
the creation of individual analyses.
It contains classes that define the following entities:

\begin{itemize}
\item AOD event format
\item Event buffer
\item Particle(s)
\item Pair
\item Analysis manager class
\item Base class for analyses
\item Readers
\item AOD writer
\item Particle cuts
\item Pair cuts
\item Event cuts
\item Other utility classes
\end{itemize}
It is designed to fulfill two main requirements:

\begin{itemize}
\item \textbf{Allow for flexibility in designing individual analyses.}
 Each analysis has its own best-performing solutions. The most trivial
 example is the internal representation of a particle momentum: in some cases
 the Cartesian coordinate system is preferable, and in other cases the
 cylindrical one.
\item \textbf{All analyses use the same AOD particle interface to access the
 data.}
 This guarantees that analyses can be chained. It is important when
 one analysis depends on the result of another, so the latter can
 process exactly the same data without the need for any conversion.
 It also allows many analyses to be carried out in the same job;
 consequently, the computation time connected with
 the data reading, job submission, etc.\ can be significantly reduced.
\end{itemize}

The design of the framework is described in detail below.
% -----------------------------------------------------------------------------
\subsection{AOD}

The \texttt{AliAOD} class contains only the information required
for an analysis. It is not only the data format as stored
in files, but it is also used internally throughout the package
as a particle container.
Currently it contains a \texttt{TClonesArray} of particles and
data members describing the global event properties.
This class is expected to evolve further as new analyses continue to be
developed and their requirements are implemented.
% -----------------------------------------------------------------------------
\subsection{Particles}

\texttt{AliVAODParticle} is a pure virtual class that defines a particle
interface.
Each analysis is allowed to create its own particle class
if none of the already existing ones meets its requirements.
Of course, it must derive from \texttt{AliVAODParticle}.
However, all analyses are obliged to
use the interface defined in \texttt{AliVAODParticle} exclusively.
If additional functionality is required, an appropriate
method is also added to the virtual interface (as a pure virtual or an empty
one). Hence, any analysis can be run on any AOD, although the processing time
might be longer in some cases (if the internal representation is not
the optimal one).
We have implemented the standard concrete particle class
called \texttt{AliAODParticle}. The momentum is stored in
Cartesian coordinates, and the class also has data members
describing the production vertex. All the PID information
is stored in two dynamic arrays. The first array contains
probabilities sorted in descending order,
and the second the corresponding PDG (Particle Data Group) codes.
The PID of a particle is defined by a data member which is
the index in these arrays. This solution allows for faster information
access during the analysis and minimizes memory and disk space consumption.
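The storage scheme can be sketched as follows (illustrative struct, not the
actual \texttt{AliAODParticle} API): probabilities sorted in descending
order, a parallel array of PDG codes, and an index selecting the assumed
identity.

```cpp
#include <cstddef>
#include <vector>

// Sketch of the PID storage scheme (illustrative, not the AliRoot API).
struct AODParticle {
    std::vector<double> pidProb;  // probabilities, sorted descending
    std::vector<int>    pidCode;  // corresponding PDG codes
    std::size_t         pid;      // index of the assumed identity

    int    PdgCode() const { return pidCode[pid]; }
    double PidProb() const { return pidProb[pid]; }
};
```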
% -----------------------------------------------------------------------------
\subsection{Pair}

The pair object points to two particles and implements
a set of methods for the calculation of pair properties.
It buffers calculated values and intermediate
results for performance reasons. This solution applies to
quantities whose computation is time consuming, and
also to quantities with a high reuse probability. A
Boolean flag is used to mark the variables already calculated.
To ensure that this mechanism works properly,
the pair always uses its own methods internally,
instead of accessing its variables directly.
The pair object has a pointer to another pair with the swapped
particles. The existence of this feature is connected to
the implementation of the mixing algorithm in the correlation
analysis package: if particle A is combined with B,
the pair with the swapped particles is not mixed.
In non-identical particle analysis the order is important, and
a pair cut may reject a pair while the reversed one would be
accepted. Hence, in the analysis the swapped pair is also tried
if the regular one is rejected. In this way the buffering feature is
automatically used also for the swapped pair.
% -----------------------------------------------------------------------------
\subsection{Analysis manager class and base class}

The {\it analysis manager} class (\texttt{AliRunAnalysis}) drives the whole
process. A particular analysis, which must inherit from the
\texttt{AliAnalysis} class, is added to it.
The user triggers the analysis by calling the \texttt{Process} method.
The manager performs a loop over events, which are delivered by
a reader (a derivative of the \texttt{AliReader} class, described below).
This design allows the analyses to be chained in the proper order if one
depends on the results of another.
The user can set an event cut in the manager class.
If an event is not rejected, the \texttt{ProcessEvent}
method is executed for each analysis object.
This method requires two parameters, namely pointers to
a reconstructed and a simulated event.

The events have a parallel structure, i.e.\ the corresponding
reconstructed and simulated particles always have the same index.
This allows for an easy implementation of an analysis where both
are required, e.g.\ when constructing residual distributions.
It is also very important in correlation simulations
that use the weight algorithm\cite{CH6Ref:Weights}.
By default, the pointer to the simulated event is null,
as is the case in experimental data processing.
An event cut and a pair cut can be set in \texttt{AliAnalysis}.
The latter points to two particle cuts, so
an additional particle cut data member is redundant,
because the user can set it in this pair cut.

The \texttt{AliAnalysis} class has a feature that allows the user to choose
which data the cuts check:
\begin{itemize}
\item the reconstructed (default);
\item the simulated;
\item both.
\end{itemize}
It has four pointers to methods (data members):
\begin{itemize}
\item \texttt{fkPass1} -- checks a particle; the cut is defined by the
 cut on the first particle in the pair cut data member;
\item \texttt{fkPass2} -- as above, but the cut on the second particle is used;
\item \texttt{fkPass} -- checks a pair;
\item \texttt{fkPassPairProp} -- checks a pair, but only two-particle
 properties are considered.
\end{itemize}
Each of them has two parameters, namely pointers to the
reconstructed and simulated particles or pairs.
The user switches the behavior with the
method that sets the above pointers to the appropriate methods.
We have decided to implement
this solution because it performs faster than the simpler one that uses
boolean flags and ``if'' statements. These cuts are used mostly inside
multiply nested loops, and even a small performance gain transforms
into a noticeable reduction of the overall computation time.
In the case of an event cut, the simpler solution was applied:
the \texttt{Rejected} method is always used to check events.
A developer of the analysis code must always use this method and
the pointers to methods itemized above to benefit from this feature.
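The pointer-to-method dispatch can be sketched as follows. The names are
illustrative, not the actual \texttt{AliAnalysis} API: the behavior is
switched once by re-pointing a member-function pointer, so the hot loop
contains no ``if'' statements.

```cpp
// Sketch of pointer-to-method dispatch (illustrative, not the AliRoot API).
struct Particle { double recPt, simPt; };

class Analysis {
public:
    // Select which data the cut checks; called once, outside the loops.
    void CheckReconstructed() { fkPass = &Analysis::PassRec; }
    void CheckSimulated()     { fkPass = &Analysis::PassSim; }

    // Hot path: a single indirect call, no branching on a flag.
    bool Pass(const Particle& p) const { return (this->*fkPass)(p); }

private:
    bool PassRec(const Particle& p) const { return p.recPt > 0.5; }
    bool PassSim(const Particle& p) const { return p.simPt > 0.5; }

    // Member-function pointer; reconstructed data are checked by default.
    bool (Analysis::*fkPass)(const Particle&) const = &Analysis::PassRec;
};
```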
% -----------------------------------------------------------------------------
\subsection{Readers}

A reader is the object that provides data for an analysis.
\texttt{AliReader} is the base class that defines a pure virtual
interface.

A reader may stream the reconstructed and/or the
simulated data. Each of them is stored in a separate AOD.
If it reads both, the corresponding reconstructed and
simulated particles always have the same index.
The most important methods for the user are the following:
\begin{itemize}
\item \texttt{Next} -- triggers the reading of the next event; it returns
 0 in case of success and 1 if no more events are available;
\item \texttt{Rewind} -- rewinds the reading to the beginning;
\item \texttt{GetEventRec} and \texttt{GetEventSim} -- return
 pointers to the reconstructed and the simulated events, respectively.
\end{itemize}
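The protocol above can be sketched as follows (illustrative classes, not the
\texttt{AliReader} API): since \texttt{Next} returns 0 on success and 1 when
no more events are available, the event loop runs while \texttt{Next() == 0}.

```cpp
#include <utility>
#include <vector>

// Sketch of the reader protocol (illustrative, not the AliRoot API).
struct Event { int id; };

class Reader {
public:
    explicit Reader(std::vector<Event> events) : fEvents(std::move(events)) {}
    int Next() {                       // 0 = success, 1 = no more events
        if (fCurrent + 1 >= static_cast<int>(fEvents.size())) return 1;
        ++fCurrent;
        return 0;
    }
    void Rewind() { fCurrent = -1; }   // back to the beginning
    const Event* GetEventRec() const {
        return fCurrent < 0 ? nullptr : &fEvents[fCurrent];
    }
private:
    std::vector<Event> fEvents;
    int fCurrent = -1;                 // positioned before the first event
};
```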
The base reader class implements functionality for
particle filtering at the reading level. The user can set any
number of particle cuts in a reader, and a particle is
read if it fulfills the criteria defined by any of them.
In particular, a particle type is never certain, and the readers
are constructed in such a way that all the PID hypotheses (with non-zero
probability) are verified.
In principle, a track can be read with more than one mass
hypothesis. For example, consider a track
which is a pion with 55\% probability and a kaon with 40\%, and a user who
wants to read all the pions and kaons with PID probabilities higher than
50\% and 30\%, respectively. In such a case two particles
with different PIDs are added to the AOD.
However, both particles have the same Unique Identification
number (UID), so it can easily be checked that they are
in fact the same track.
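The multi-hypothesis reading can be sketched as follows. The types and
thresholds are illustrative, not the \texttt{AliReader} API: one track can
yield several AOD particles, one per PID hypothesis that passes its
probability threshold, and all copies share the track's UID.

```cpp
#include <vector>

// Sketch of multi-hypothesis reading (illustrative, not the AliRoot API).
struct Track    { int uid; double pionProb, kaonProb; };
struct AODEntry { int uid; int pdg; };

std::vector<AODEntry> ReadTrack(const Track& t,
                                double minPionProb, double minKaonProb) {
    std::vector<AODEntry> out;
    if (t.pionProb > minPionProb) out.push_back({t.uid, 211});  // pion hypothesis
    if (t.kaonProb > minKaonProb) out.push_back({t.uid, 321});  // kaon hypothesis
    return out;                     // both entries share the track's UID
}
```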
% Multiple File Sources
\texttt{AliReader} implements a feature that allows the user to specify and
manipulate multiple data sources, which are read sequentially.
The user can provide a list of directory names where the data are searched
for. The \texttt{ReadEventsFromTo} method allows one to limit the range of
events that are read
(e.g.\ when only one event of the hundreds stored in an AOD is of interest).
% Event Buffering
\texttt{AliReader} has a switch that enables event buffering,
so an event is not deleted and can be quickly accessed if requested again.
% Blending
Particles within an event are frequently sorted in some way; e.g.\
the particle trajectory reconstruction provides tracks sorted according
to their transverse momentum. This leads to asymmetric
distributions where they are not expected. The user can request the
reader to randomize the particle order with the \texttt{SetBlend} method.
% Writing AOD
The AOD objects can be written to disk with \texttt{AliReaderAOD},
using the static method \texttt{WriteAOD}. As the first
parameter the user must pass a pointer to another reader that
provides the AOD objects. Typically it is \texttt{AliReaderESD},
but it can also be another one, e.g.\ another \texttt{AliReaderAOD}
(to filter out the desired particles from already existing AODs).