gconesab [Sun, 9 Oct 2011 09:36:55 +0000 (09:36 +0000)]
Coverity fixes in QA and Pi0EbE
Pi0EbE: Add histograms for time per selected cluster and ncells per selected cluster, remove the time-difference vs asymmetry histogram; simplify the code filling the shower shape histograms by moving it to a dedicated method
Electron: Add time histogram
richterm [Sat, 8 Oct 2011 23:21:14 +0000 (23:21 +0000)]
adding histogram to monitor the number of failed cluster verifications; bugfix: correcting the sigmaY/Z^2 histograms by taking pad width and time pitch into account
richterm [Sat, 8 Oct 2011 23:18:27 +0000 (23:18 +0000)]
better assignment of remaining clusters to tracks; minor bugfix in the calculation of the compression ratio: in verification mode the cluster id blocks had been wrongly taken into account
richterm [Sat, 8 Oct 2011 23:14:26 +0000 (23:14 +0000)]
sorting the raw trackpoints in the array to be consistent with the loop over padrows; the unordered sequence in rare circumstances caused decoding errors due to misidentification of trackpoint ids with rows. Adding a fatal error message if the pad or time of an associated cluster exceeds the range of the compressed format
gconesab [Sat, 8 Oct 2011 10:53:32 +0000 (10:53 +0000)]
Major refactoring of the class: many parts of the code moved to dedicated methods to make the code more readable
Added methods to study logarithmic weights
esicking [Fri, 7 Oct 2011 13:00:15 +0000 (13:00 +0000)]
Added analysis task for efficiency studies (original by Veronica Canoa Roman). Different track types can now be tested (global, TPC, ITS). It can also be used to study nuclei and anti-nuclei transport.
richterm [Fri, 7 Oct 2011 10:47:09 +0000 (10:47 +0000)]
Using the AliHLTTPCDataCompressionDecoder in the offline cluster access through
a container class for TClonesArrays of AliTPCclusterMI.
The access interface AliHLTTPCClusterAccessHLTOUT::Execute propagates the
following return values in the optional 3rd parameter:
- >=0      success, number of clusters
- -ENODATA no data in HLTOUT
- -EINVAL  invalid parameter/argument
- -ENOMEM  memory allocation failed
- -EACCES  no access to HLTOUT
- -ENODEV  internal error, cannot get AliHLTSystem
- -ENOBUFS internal error, cannot get cluster array
AliHLTTPCDataCompressionComponent now forwards the cluster MC labels unchanged
since the mapping is done in the decoding.
Correcting the calculation of the compression factor, which was affected by the
additional cluster id data blocks.
jgrosseo [Thu, 6 Oct 2011 11:39:15 +0000 (11:39 +0000)]
centrality task now runs over all events (previously only over kMB)
pass number made configurable
file event_stat.root registered with the manager in the add physics selection macro
richterm [Thu, 6 Oct 2011 10:44:44 +0000 (10:44 +0000)]
defining chain 'TPC-hltout-compressionmonitor' to run the TPCDataCompressorMonitor component directly on recorded data in HLTOUT; removing all command line arguments from the TPC-compression configuration, initializing from OCDB instead
cholm [Wed, 5 Oct 2011 13:37:41 +0000 (13:37 +0000)]
Some fixes for the QA train:
- Event inspector writes run number to histogram file
- ForwardQA task does a little debugging - if enabled
- Energy fitter now produces an additional histogram with the number
  of available, empty, low-statistics, candidate, and
  actually fitted energy loss spectra
- Script to add task sets up output container in "trending.root"
- Sharing filter has more information in the output
- MC event inspector updated to fit interface
- Clean-up in ForwardAODConfig.C
- (Shell) script to get output of QA train
cvetan [Tue, 4 Oct 2011 12:16:16 +0000 (12:16 +0000)]
Protection against missing DPs. Recently we got some runs where the RDB manager got overloaded, leading to missing DPs. From now on, in these cases the preprocessor will fail. To be ported to the release once the nightly test passes.
rgrosso [Mon, 3 Oct 2011 21:40:36 +0000 (21:40 +0000)]
Implementing the possibility of loading the CDB as a snapshot in two ways:
1) You have it locally, in ./OCDB. That will be the case if you untar a CDB tarball.
2) You can find, somewhere on alien, a root file containing both a map of the
CDB entries and a list of the corresponding Ids (for the UserInfo).
1) is triggered by using:
cdb->SetDefaultStorage("snapshot://*")
where "*" is replaced by the original alien location from which the tarball was built.
2) is triggered by the call:
reco->SetFromCDBSnapshot(filename)
where filename is the name of the file containing the entries map and the ids list.
1) is implemented by changes in AliCDBLocal and AliCDBLocalParam.
2) is implemented by changes in AliReconstruction and AliCDBManager. For 2) the
macro macros/MakeCDBSnapshot.C is also added, as an example of how to produce
the snapshot file.
Additionally, two coding convention violations were fixed in AliCDBManager (const added to two parameters).
richterm [Mon, 3 Oct 2011 13:53:27 +0000 (13:53 +0000)]
adding configuration TPC-compression-monitoring; removing the mode parameter of TPC-compression-huffman-trainer to be more flexible in the generation of the Huffman table
richterm [Mon, 3 Oct 2011 13:49:18 +0000 (13:49 +0000)]
bugfix: clean monitoring container only after it has been archived; adding option -cluster-verification to switch verification mode of TPCDataCompressor; bugfix in capacity check for cluster id data block
richterm [Mon, 3 Oct 2011 13:21:11 +0000 (13:21 +0000)]
adding an interface to fill different types of containers from the compressed cluster data; adding monitoring histograms; adding comparison with the original HW cluster data
morsch [Mon, 3 Oct 2011 12:32:03 +0000 (12:32 +0000)]
There are two modifications:
- [2011 OCDB-OADB synchronization]: the port of an OCDB modification committed last week, so that the survey11 collection corresponds to the OCDB content (as
it should)
- [new collection with GetObject_by_run access]: added the new collection 'EmcalMatrices'. It is the general collection where all the official matrices are stored:
there is one object per year (as for the OCDB), with the same validity ranges as the OCDB. The sets of matrices can now be obtained "by run number" (getting the
official set for that run) or "by collection name" (kept for backward compatibility, and also to preserve some modified alignment sets used for tests and
systematics studies)
hristov [Mon, 3 Oct 2011 11:15:23 +0000 (11:15 +0000)]
Changes for #87331: Combined commit MUON+HLT
AliMUONDigitCalibrator
Simplified the ctors interface (removed a default parameter)
AliMUONReconstructor
Propagate AliMUONDigitCalibrator interface change
Change order of delete calls in the destructors (to avoid accessing dead OCDB objects in the dtor of the calibrator)
AliMUONTrackerDataMaker
Propagate AliMUONDigitCalibrator interface change
AliMUON
Propagate changes in AliMUONDigitizerV3 (e.g. pass the recoparams when creating the digitizer)
AliMUONVDigit
AliMUONRealDigit
AliMUONDigit
All digits now have the ChargeInFC method defined
AliMUONDigitizerV3
Now uses the RecoParams to know how to decalibrate.
Now assumes that all digits to be decalibrated have their charge in fC, as it should be...
One more step towards removal of specific OCDB storages for simulations: take
MUON/Calib/Gains from the raw OCDB.
Added an option "-b anchorRunNumber" to AlirootRun_MUONtest.sh in order to test realistic simulations
(i.e. using an anchor run). Note that this option references a couple of "private" OCDB files. This is
only an interim solution...
Config.C
Added a parameter to know when we are doing realistic simulations, in which case we must
instantiate the ITS to get the correct propagation of the simulated vertex to the reconstruction.