\section{Reconstruction strategy \label{sec:strategy}}
This section describes how the different correction factors are obtained: the energy calibration (MIP, $\pi^{0}$, run by run), the time calibration and the bad channel mask.

All these correction factors and masks are stored in the OCDB and in the OADB. Since the calibration parameters are not ready before the full ALICE reconstruction of the first data-taking periods is completed, they are stored not only in the OCDB but also in the OADB, so that the clusters can be corrected at the analysis level. For the moment the time calibration and the run-by-run correction factors are stored only in the OADB, not in the OCDB.
\subsubsection{Energy calibration: MIP calibration before installation - Julien}

First, a calibration is performed using cosmic-ray measurements before the SuperModules are installed at P2, but the accuracy obtained with MIPs is not good enough.
\subsubsection{Energy calibration: $\pi^{0}$ - Catherine}

During data taking, the energy calibration relies on the measurement of the $\pi^{0}$ mass position per cell. Each tower has a calibration coefficient. In what follows, a calibration parameter is the ratio of the fitted mass to the PDG mass value, where the fitted mass is given by a Gaussian fit to the $\pi^{0}$ invariant mass peak in a given tower (on top of a combinatorial background, fitted by a 2nd degree polynomial).
About 100-200 M EMCAL (L0) triggered events (trigger threshold at 1.5-2 GeV) allow the calibration of the majority of the towers. The towers located on rows 0 and 23 of each SuperModule (SM) and those behind the support frame (about 5 columns per SM) have much lower statistics and would need a minimum of 150 M events (probably more). Note that run-to-run temperature variations change the towers' response in a non-uniform way: the width of the $\pi^{0}$ peak increases, and the mean $\pi^{0}$ mass is shifted differently for the various towers. Also, the $\pi^{0}$ mass shifts to lower values for the towers with material in front, due to photon conversions close to the EMCAL surface.

A few iterations on the data, each yielding improved calibration coefficients, are needed to achieve a good accuracy (1-2\%). Since the online calibration has a strong effect on the trigger efficiency, the voltage gains of the APDs are adjusted after each running period, to get a uniform trigger performance. Still, some towers are difficult to calibrate because they sit behind a lot of material (TRD support structures). For those, MIP or $J/\Psi$ measurements could help.
\paragraph*{$\pi^{0}$ Calibration Procedure\\}

Since the $\pi^{0}$ decays into two photons, its invariant mass is calculated from the energies of two clusters (and the angle between them). The position of the invariant mass peak of a tower therefore depends not only on its own response and calibration coefficient, but also on an average of the responses and calibration coefficients of all the other towers of the SM, weighted by how often they appear in combination with a cluster in the considered tower. A second, probably weaker, effect originates from the fact that a cluster usually covers more than the considered tower. To simplify the calibration process, the calibration coefficient is calculated as if the whole energy of the cluster were contained in the tower of the cluster with the largest signal. So the position of the invariant mass peak of a tower also depends on an average of the responses and calibration coefficients of its neighbouring towers. For these reasons, the calibration of the calorimeter with the $\pi^{0}$ is an iterative procedure:
\begin{itemize}
\item Set all calibration coefficients to 0 in the OCDB.
\item Reconstruct the $\pi^{0}$'s with these OCDB coefficients.
\item Run the analysis code on these data to produce the analysis histograms and a 1st version of the calibration coefficients.
\item Look at the fits of the tower invariant mass histograms and discard (or set by hand) the calibration coefficient of the towers for which the fit cannot be trusted.
\item Create a 1st set of OCDB coefficients.
\item Reconstruct the $\pi^{0}$'s with these OCDB coefficients.
\item Run the analysis code on these data to produce the analysis histograms and a 2nd version of the calibration coefficients.
\item Look at the fits of the tower invariant mass histograms and discard (or set by hand) the calibration coefficient of the towers for which the fit cannot be trusted.
\item Create a 2nd set of OCDB coefficients.
\item Etc., until the invariant mass is satisfactory in all the towers.
\end{itemize}
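As a concrete sketch of the quantities involved: the invariant mass entering each tower's histogram follows from the two cluster energies and their opening angle, and each iteration rescales a tower's coefficient so that the fitted peak moves towards the PDG mass. The quadratic update rule below is an illustrative assumption (the squared peak position scales roughly linearly with the leading-tower coefficient), not necessarily the exact rule used in the AliRoot code.

```cpp
#include <cmath>

// Invariant mass of a photon pair from the two cluster energies (GeV)
// and their opening angle (rad): m = sqrt(2 E1 E2 (1 - cos(theta))).
double InvariantMass(double e1, double e2, double openingAngle) {
  return std::sqrt(2.0 * e1 * e2 * (1.0 - std::cos(openingAngle)));
}

// Illustrative per-tower update for one iteration (NOT the exact AliRoot
// rule): assuming m^2 scales linearly with the leading-tower coefficient,
// c_new = c_old * (m_PDG / m_fit)^2 pushes the fitted peak towards m_PDG.
double UpdatedCoefficient(double cOld, double fittedMass,
                          double pdgMass = 0.13498 /* GeV/c^2 */) {
  return cOld * (pdgMass / fittedMass) * (pdgMass / fittedMass);
}
```

A tower whose fitted peak sits below the PDG mass thus gets its coefficient raised, and vice versa, which is why the procedure converges over a few passes.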
When the statistics is sufficient, 4 iterations should be enough to finalize the calibration (in practice more are needed, due to outliers or additional studies).
There are 3 sets of codes:
\begin{itemize}
\item Reco code: reads the data and builds the $\pi^{0}$ invariant mass distribution in each tower, after applying some cuts on the cluster and $\pi^{0}$ parameters. The output is a root file with invariant mass histograms (per tower, and summed up per SM, per $p_{T}$ bin).
\item Analysis code: reads the file produced by the reco code and analyses the histograms to produce the calibration coefficients. This is the code I present in what follows.
\item A code which reads the calibration coefficients and writes them into a format that is loadable into the OCDB.
\end{itemize}
The code is located in EMCAL/calibPi0/:
\begin{itemize}
\item macros/: contains the various macros.
\item input/: contains the root files produced by the reconstruction code for the various iterations (``passes''). It has subdirectories ``pass0/'', ``pass1/'', etc., with the root file in each directory.
\item output/: contains the various files produced by the analysis code for the various passes. It has subdirectories ``pass0/'', ``pass1/'', etc., with, in each directory, the various output files related to the pass.\footnote{Note that it would not necessarily help to set up a code that automatically reads and writes the pass number to avoid the hardcoded directories, because the same pass is sometimes run several times with various parameters (e.g. cuts in the reconstruction, more statistics, various masked zones, hand-customization of a few calibration coefficients, etc.).}
\end{itemize}
The cuts applied in the reconstruction are:
\begin{itemize}
\item Bad towers masked.
\item Both clusters in the same SM (to avoid misalignment effects).
\item 1-tower clusters removed.
\item 20~ns timing cut.
\item Non-linearity correction (for the cluster energy), from beam test as far as we know.
\item No asymmetry cut.
\item $E_{cluster} > 0.8$~GeV, or 0.7~GeV if there is little statistics. Tests showed that, to remove the residual non-linearity (the $\pi^{0}$ invariant mass rises with $p_{T}$), tightening the cut on $E_{cluster}$ was more efficient than requiring symmetric decays (both photons of similar energy, e.g. $asym < 0.5$ with $E_{\gamma} > 0.5$~GeV).
\end{itemize}
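The pair-level part of this selection can be sketched as follows, assuming a hypothetical minimal Cluster struct (the real AliRoot classes differ); bad-tower masking and the non-linearity correction are assumed to have been applied upstream:

```cpp
#include <cmath>

// Hypothetical minimal cluster description, for illustration only.
struct Cluster {
  double energy;      // GeV, after non-linearity correction
  double timeNs;      // ns, relative to the nominal collision time
  int    superModule;
  int    nTowers;     // number of towers in the cluster
};

// Sketch of the selection listed above for a cluster pair entering the
// invariant-mass histograms (note: no asymmetry cut is applied).
bool PassPairCuts(const Cluster& a, const Cluster& b,
                  double minE = 0.8, double timeWindowNs = 20.0) {
  if (a.superModule != b.superModule) return false;  // same SM only
  if (a.nTowers < 2 || b.nTowers < 2) return false;  // drop 1-tower clusters
  if (std::fabs(a.timeNs) > timeWindowNs ||
      std::fabs(b.timeNs) > timeWindowNs) return false;
  return a.energy > minE && b.energy > minE;         // 0.8 (or 0.7) GeV cut
}
```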
The reconstruction code also has the possibility to mask some areas. This is useful to disentangle the zones with more material in front of them from those without. In the invariant mass distributions, the only $\pi^{0}$ candidates kept are those for which both clusters belong to the non-masked zones. In 2011, we considered masking the zones behind the support frame (in all the SMs, or only in the SMs with TRD modules in front of them, i.e. SM 6-9 that year), plus additional problematic zones, to avoid taking clusters in these zones for the calculation of the average invariant mass in the towers with less material. (NB: not used for the final calibration results, but for studies.)
The analysis code has 3 input files:
\begin{itemize}
\item the root file f05 with the invariant mass histograms produced by the reconstruction code,
\item a file txtFileIn (output\_calibPi0\_parameters.txt) that contains the values of the various fit parameters for each tower at the previous pass,
\item a file txtFilePrevCalib (output\_calibPi0\_coeffs\_clean.txt) that contains the value of the calibration coefficient for each tower at the previous pass (after the hand-made corrections).
\end{itemize}
The last 2 files are therefore not used for ``pass0''. To run the code for ``pass0'' (1st iteration), put the name of a valid file (e.g. one from last year) and simply ignore the corresponding plots (red colour, in the last section; see below).
There are 4 output files, written in the current directory (calibPi0/): be careful not to overwrite an existing file! After the code has been run, simply move those files to the relevant output/passXX/ directory.
\begin{itemize}
\item a postscript file psfile (output\dots) with the plots described below,
\item a root file rootFileOut (output\_calibPi0.root) that contains the same plots in root format,
\item a file txtFileOut (output\_calibPi0\_parameters.txt) that contains the values of the various fit parameters for each tower, for the current pass,
\item a file outputFile (output\_calibPi0\_coeffs.txt) that contains the value of the calibration coefficient for each tower, for the current pass.
\end{itemize}
Once the code has been run and the output files copied to the relevant output directory, I copy output\_calibPi0\_coeffs.txt to output\_calibPi0\_coeffs\_clean.txt, and modify the latter by hand to put the desired calibration coefficients where we estimate that the fitted ones cannot be trusted.
9 parameters are defined to qualify the invariant mass distribution in each tower: the distribution is fitted by a Gaussian plus a 2nd degree polynomial for the combinatorial background. The parameters are:
\begin{itemize}
\item amplitude of the Gaussian fit,
\item mean of the Gaussian fit,
\item sigma of the Gaussian fit,
\item the a, b and c parameters of the combinatorial background fit $ax^2+bx+c$,
\item I (histogram integral),
\item S (integral of the Gaussian fit),
\item I-S.
\end{itemize}
Minimal and maximal cut values are hardcoded (and to be changed at each iteration) for each parameter.
When the values of all the parameters lie between the two extremes, the tower (i.e. the fit values, hence the mean, hence the calculated calibration coefficient) is ``trusted''. If one or more parameters have a value above the max cut value or below the min cut value, the tower is ``untrusted''.
Because these cut values cannot be guessed in advance, the analysis code must be run twice per pass: a 1st time to get the distributions of all 9 parameters and to decide, on the basis of those distributions, the suitable cut values separating the towers to be trusted from those not to be trusted; the chosen values are then plugged into the code, and the code is run a 2nd time, for real this time.
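The trusted/untrusted decision described above amounts to a simple window check on the 9 parameters. A minimal sketch, with illustrative names (the real code stores the cuts in hardcoded arrays):

```cpp
#include <array>

// The 9 fit parameters per tower, in the order listed above:
// amplitude, mean, sigma, a, b, c, I, I-S, S.
constexpr int kNPar = 9;

// A tower is "trusted" only if every parameter lies inside [cutMin, cutMax];
// a single parameter outside its window marks the tower "untrusted".
bool TowerIsTrusted(const std::array<double, kNPar>& par,
                    const std::array<double, kNPar>& cutMin,
                    const std::array<double, kNPar>& cutMax) {
  for (int i = 0; i < kNPar; ++i)
    if (par[i] < cutMin[i] || par[i] > cutMax[i]) return false;
  return true;
}
```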
The macro (currently called DrawJulienFullEMCAL6.C) takes 1 argument (set to 10 by default): choice, which sets the number of SMs to include in the analysis. The values are either 4 (for the older SMs), 6 (for only the newer SMs), or 10 (for the whole EMCAL). The macro is run this way:

\begin{verbatim}
aliroot -b -q 'macros/DrawJulienFullEMCAL6.C++(10)'
\end{verbatim}
There are various places where things must be customized before running the code; they can be spotted by searching for this line: //CUSTOMIZE customize :
\begin{itemize}
\item testChoice: a flag that shortens the execution time for tests. 0 = not a test; 1 = run with only the first 2 columns of each SM; 2 = run with only the first 2 columns of the first SM,
\item the root input file f05,
\item the text input file txtFilePrevCalib (in principle not the name, only the path),
\item the text input file txtFileIn (in principle not the name, only the path),
\item if necessary: the min and max range values for the parameter histograms, tabMin and tabMax,
\item the min and max cut values for the parameters, cutMin and cutMax,
\item if necessary: the number of bins in $p_{T}$ (for the 1st section, see below), nbPtBins, and their range, tabPtBins,
\item the text output on the standard output (``printf''s).
\end{itemize}
Finally, the first iteration needs the recalibration factors. This file is made by running macros/RecalibrationFactors\_TextToHistoJulien\_mult\_2012.C on the output\_calibPi0\_coeffs.txt file. Once the RecalibrationFactors.root file is created, it needs to be linked properly to re-run the reconstruction.
\subsubsection{Energy calibration: Run by run temperature gain variations - Evi, David}

The calibration of the SuperModules depends on the temperature dependence of the gains of the individual towers. We observe that from one period to the next, when the temperature changes, the $\pi^{0}$ peak position also changes. There are 2 ways to correct for this effect: either measure the mean temperature per run and use the gain curves per tower to calculate the corresponding correction, or use the calibration LED events to quantify the variation with respect to a reference run. Each of these 2 procedures has problems: poor or missing knowledge of the gain curves of some towers, or bad performance of the LED system in certain regions.
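The LED option can be sketched as follows, under the simple assumption that a tower's gain scales with its mean LED amplitude, so the correction is just the ratio to the reference run (names are illustrative, not the AliRoot API):

```cpp
// Per-tower run-by-run correction from LED events (illustrative sketch):
// if the mean LED amplitude rose by some factor relative to the reference
// run, the gain rose by the same factor, so the energy is divided by it.
double LedCorrectionFactor(double ledMeanThisRun, double ledMeanRefRun) {
  if (ledMeanThisRun <= 0.0) return 1.0;  // no usable LED signal: no correction
  return ledMeanRefRun / ledMeanThisRun;
}
```

Towers with a badly performing LED system would fall back to the gain-curve method (or to no correction), which is exactly the limitation noted above.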
\subsubsection{Time calibration - Marie}

The time of the amplitude measured in a given cell is a good candidate to reject noisy towers, identify pile-up events, or even identify heavy hadrons at low energy. The average time is around 650 ns. The aim of the time calibration is to move this mean value to 0, with as small a spread as possible (negative values are unavoidable for the moment).
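In its simplest form this is a per-cell offset subtraction: the offset is the mean measured time (around 650 ns), and calibrated times are the raw times minus that offset, centring the distribution at 0. A sketch under that assumption:

```cpp
#include <numeric>
#include <vector>

// Per-cell time offset: the mean of the measured times for that cell.
double CellTimeOffset(const std::vector<double>& timesNs) {
  return std::accumulate(timesNs.begin(), timesNs.end(), 0.0) /
         static_cast<double>(timesNs.size());
}

// Calibrated time: raw time minus the cell's offset. The result can be
// negative, as noted above.
double CalibratedTime(double rawTimeNs, double offsetNs) {
  return rawTimeNs - offsetNs;
}
```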
\subsection{Alignment - Marco}

CERN provides survey measurements of the positions of different EMCAL SuperModule points at the beginning of the running period (and on request?). As soon as this information is available, the ideal EMCAL positions used by default in the reconstruction are corrected with special position matrices calculated from the measurements. Finally, once the data are reconstructed, the accuracy of the alignment is cross-checked with track matching and $\pi^{0}$ mass measurements, since those values change with variations in the positions of the SuperModules.
\subsection{Bad channel finding - Alexis}

The analysis is done on the output of the QA histograms. The idea is to check, over the cells, the distributions of:
\begin{itemize}
\item the average energy (criterion 1),
\item the average number of hits per event (criterion 2) (averages computed for $E > E_{min}$),
\item shape criteria: $\chi^{2}/ndf$ (criterion 3), A (criterion 4) and B (criterion 5), which are parameters from the fit of each cell's amplitude distribution (the fit function is $A\,e^{-Bx}/x^2$ and the fit range is from $E_{min}$ to $E_{max}$).
\end{itemize}
Each criterion is applied once at each step, and the marked cells (above nsigma from the mean value) are excluded from the computation of the next distribution.\footnote{For each criterion there are some parameters: $E_{min}$ (min energy), $E_{max}$ (max energy for the energy distribution fit), and nsigma, the number of sigmas used for excluding a cell.}

The typical nsigma used is 4 or 5. The minimum energy considered is 0.1-0.3 GeV, and the maximum energy for the fit depends on the data. Bad and warm channels are not distinguished automatically: the distinction is made by a visual check, so it is to some extent subjective.
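A single exclusion step can be sketched as follows, assuming the per-cell quantity (e.g. the average energy) is roughly Gaussian over the good cells; applying the function repeatedly, recomputing mean and sigma each time, reproduces the iterative exclusion described above:

```cpp
#include <cmath>
#include <vector>

// One exclusion step: given one per-cell quantity, mark as bad every
// not-yet-excluded cell lying more than nSigma away from the mean of the
// not-yet-excluded cells. Returns the updated bad-cell mask.
std::vector<bool> MarkOutliers(const std::vector<double>& value,
                               std::vector<bool> bad, double nSigma) {
  double sum = 0.0, sum2 = 0.0;
  int n = 0;
  for (std::size_t i = 0; i < value.size(); ++i)
    if (!bad[i]) { sum += value[i]; sum2 += value[i] * value[i]; ++n; }
  const double mean  = sum / n;
  const double sigma = std::sqrt(sum2 / n - mean * mean);
  for (std::size_t i = 0; i < value.size(); ++i)
    if (!bad[i] && std::fabs(value[i] - mean) > nSigma * sigma) bad[i] = true;
  return bad;
}
```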

The cells are then marked as bad or warm and propagated via the OCDB; in the reconstruction pass, the bad ones are excluded.