\section{Reconstruction strategy \label{sec:strategy}}

\subsection{Calibration}

Here we describe how the different correction factors needed are obtained: the energy calibration (MIP, $\pi^{0}$, run by run), the time calibration and the bad channel mask.

All these correction factors and masks are stored in the OCDB and in the OADB. Since the calibration parameters are not available before the full ALICE reconstruction of the first periods is done, they are stored not only in the OCDB but also in the OADB, so that the clusters can be corrected at the analysis level. For the moment the time calibration and the run-by-run correction factors are stored only in the OADB, not in the OCDB.

\subsubsection{Energy calibration: MIP calibration before installation - Julien}
\subsubsection{Energy calibration: $\pi^{0}$ - Catherine}

First, a calibration is obtained from cosmic-ray measurements performed before installing the SuperModules at P2, but the accuracy obtained with MIPs is not good enough. Already during data taking we therefore rely on the measurement of the $\pi^{0}$ mass position per cell. This requires of the order of 100--200~M events triggered by the EMCAL (trigger threshold at 1.5--2~GeV). A few iterations over the data, each yielding improved calibration coefficients, are needed to reach a good accuracy (1--2\%). Since the online calibration has a strong effect on the trigger efficiency, the voltage gains of the APDs are adjusted after each running period to equalize the trigger performance. Some towers sit behind a lot of material (TRD support structures) and will remain difficult to calibrate; for those, MIP or J/$\psi$ measurements could help, but we have not yet reached the point of being able to use them.

ALICE has a reconstruction strategy mainly driven by the central barrel detectors: run by run, a calibration pass (CPass) is done with only a restricted fraction of the run statistics. This is insufficient for the calorimeters, which is why we do not participate actively in such passes, except for QA purposes. Since we do not enter this strategy, we need to obtain the best calibration as soon as possible; for this reason special calibration runs are requested at the beginning of the running period, and the calibration parameters are produced as soon as the manpower is available. For details on the calibration strategy see this presentation at a special calibration session.

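As a rough illustration of the iterative procedure, the toy sketch below (not the ALICE implementation; the way the $\pi^{0}$ peak depends on the partner-cell gains is an assumption made purely for illustration) applies a multiplicative per-cell update that pulls each cell's $\pi^{0}$ peak towards the nominal mass over a few iterations:

```python
import random

# Toy sketch of the iterative per-cell pi0 calibration.
# The peak-position model below is an illustrative assumption,
# not the ALICE reconstruction code.
M_PDG = 0.1349768  # GeV, nominal pi0 mass

random.seed(1)
n_cells = 20
# unknown per-cell miscalibration we are trying to undo
true_response = [random.uniform(0.85, 1.15) for _ in range(n_cells)]
coeff = [1.0] * n_cells  # calibration coefficients, start uncalibrated

def measured_mass(i):
    """Assumed toy model: the per-cell pi0 peak shifts with the cell's own
    effective gain and, more weakly, with the average gain of the partner cells."""
    eff = [true_response[j] * coeff[j] for j in range(n_cells)]
    partner = sum(eff) / n_cells
    return M_PDG * (eff[i] * partner) ** 0.25

for iteration in range(4):
    masses = [measured_mass(i) for i in range(n_cells)]
    for i in range(n_cells):
        # multiplicative update: pull each cell's peak to the nominal mass
        coeff[i] *= (M_PDG / masses[i]) ** 2

spread = max(abs(measured_mass(i) / M_PDG - 1.0) for i in range(n_cells))
print(f"residual peak shift after 4 iterations: {spread:.4%}")
```

In this toy the deviation of each peak from the nominal mass shrinks geometrically with each pass, which is why a few iterations suffice to reach the percent level quoted above.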
\subsubsection{Energy calibration: Run by run temperature gain variations - Evi, David }

The SuperModule calibration depends on the temperature dependence of the gains of the individual towers. We observe that from one period to another, as the temperature changes, the $\pi^{0}$ peak position also changes. There are two ways to correct for this effect: measure the mean temperature per run and use the gain curves per tower to calculate the corresponding correction; or use the LED calibration events to quantify the variation with respect to a reference run. Both procedures have problems: poor or missing knowledge of the gain curves of some towers, and bad performance of the LED system in certain regions.

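The first method can be sketched as follows, assuming (purely for illustration) an exponential per-tower gain curve $G(T) = G_{\mathrm{ref}}\,e^{-b\,(T - T_{\mathrm{ref}})}$; the slope value is invented, not a measured APD characteristic:

```python
import math

# Hypothetical per-tower gain model: G(T) = G_ref * exp(-slope * (T - T_ref)).
# The slope below is an illustrative placeholder, not a measured value.
def gain(temp, t_ref=20.0, slope=0.02):
    return math.exp(-slope * (temp - t_ref))

def temperature_correction(mean_run_temp, t_ref=20.0, slope=0.02):
    """Multiplicative correction bringing a run at mean_run_temp back to the
    reference temperature: E_corrected = correction * E_measured."""
    return gain(t_ref, t_ref, slope) / gain(mean_run_temp, t_ref, slope)

# A run 2 K warmer than the reference has a lower gain in this model,
# so the correction factor is above unity.
corr = temperature_correction(22.0)
print(f"correction for a +2 K run: {corr:.4f}")
```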
\subsubsection{Time calibration - Marie }

The time of the amplitude measured by a given cell is a good candidate to reject noisy towers, identify pile-up events, or even identify heavy hadrons at low energy. The average raw time is around 650~ns. The aim of the time calibration is to move this mean value to 0, with as small a spread as possible (negative values are unavoidable for the moment).

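In its simplest form this is a per-cell offset subtraction; the minimal sketch below (illustrative code, with hypothetical helper names, not the ALICE implementation) derives the offset of each cell from its mean raw time and subtracts it:

```python
from statistics import mean

# Minimal sketch of the time calibration: per cell, subtract the mean raw
# time so the calibrated distribution is centred at zero.
def time_offsets(raw_times_per_cell):
    """raw_times_per_cell: {cell_id: [raw times in ns]} -> {cell_id: offset}."""
    return {cell: mean(times) for cell, times in raw_times_per_cell.items()}

def calibrate(cell, t_raw, offsets):
    return t_raw - offsets[cell]

# toy data: raw times sit around ~650 ns, as in the text
raw = {0: [648.0, 651.0, 650.0], 1: [653.0, 649.0, 651.0]}
offsets = time_offsets(raw)
print(calibrate(0, 650.0, offsets))  # close to 0 after calibration
```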
\subsection{Alignment - Marco}

CERN provides survey measurements of the positions of different EMCAL SuperModule points at the beginning of the running period (and on request?). As soon as this information is available, the ideal EMCAL positions used by default in the reconstruction are corrected with special position matrices calculated from the measurements. Finally, once the data are reconstructed, the accuracy of the alignment is cross-checked with track matching and $\pi^{0}$ mass measurements, since those values change with variations in the positions of the SuperModules.
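
Conceptually, applying a position matrix amounts to transforming the ideal reference points. The sketch below reduces this to a rotation about $z$ plus a translation; all numbers are invented for illustration and do not correspond to actual survey values:

```python
import math

# Hedged sketch: an alignment "matrix" reduced to a small rotation about z
# plus a translation, applied to an ideal SuperModule reference point.
def align(point, angle_z_rad, translation):
    x, y, z = point
    c, s = math.cos(angle_z_rad), math.sin(angle_z_rad)
    xr, yr = c * x - s * y, s * x + c * y
    tx, ty, tz = translation
    return (xr + tx, yr + ty, z + tz)

ideal = (428.0, 0.0, 176.0)                      # cm, made-up ideal position
aligned = align(ideal, 0.001, (0.2, -0.1, 0.3))  # made-up survey corrections
print(aligned)
```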

\subsubsection{Bad channel mask}

The analysis is done on the output of QA histograms.

We check the distribution over the cells of:
\begin{itemize}
\item the average energy (criterion 1),
\item the average number of hits per event (criterion 2), the averages being computed for $E > E_{\mathrm{min}}$,
\item shape criteria: $\chi^{2}/ndf$ (criterion 3), $A$ (criterion 4) and $B$ (criterion 5), which are the parameters of the fit of each cell amplitude distribution (the fit function is $A\,e^{-Bx}/x^{2}$ and the fit range is from $E_{\mathrm{min}}$ to $E_{\mathrm{max}}$).
\end{itemize}
We run each criterion once; at each step we exclude the marked cells (more than $n_{\sigma}$ away from the mean value) before computing the next distribution.

For each criterion we have the parameters $E_{\mathrm{min}}$ (minimum energy), $E_{\mathrm{max}}$ (maximum energy for the energy distribution fit), and $n_{\sigma}$, the number of standard deviations beyond which a cell is excluded.

The typical $n_{\sigma}$ used is 4 or 5. The minimum energy considered is 0.1--0.3~GeV, and the maximum fit energy depends on the data sample under study.
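
The iterative $n_{\sigma}$ exclusion applied by each criterion can be sketched as follows (illustrative Python with a hypothetical helper name; the real analysis runs over the QA histograms, not a plain dictionary):

```python
from statistics import mean, stdev

# Sketch of one exclusion criterion: iteratively mark cells whose value
# (e.g. average energy) lies more than nsigma from the mean of the
# remaining, still-unmarked cells.
def mark_outliers(values, nsigma=4.0):
    """values: {cell_id: value}. Returns the set of marked (bad/warm) cells."""
    marked = set()
    while True:
        good = {c: v for c, v in values.items() if c not in marked}
        if len(good) < 3:
            break  # too few cells left to estimate mean and sigma
        mu, sigma = mean(good.values()), stdev(good.values())
        new = {c for c, v in good.items() if abs(v - mu) > nsigma * sigma}
        if not new:
            break  # converged: no further cells beyond nsigma
        marked |= new
    return marked

# toy input: 50 well-behaved cells plus one clearly hot cell (id 99)
cells = {i: 1.0 + 0.01 * (i % 5) for i in range(50)}
cells[99] = 50.0
print(mark_outliers(cells))  # -> {99}
```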
44
We do not distinguish bad from warm cells automatically; this distinction is made by visual inspection, so it is to some extent subjective.
46
The cells are then marked as bad or warm and passed through the OCDB; in the reconstruction pass, the bad ones are excluded.