WRanalysis
WRanalysis Documentation



Download instructions:

Follow the link Instructions

Developer instructions:

Follow the link Developer instructions

Files Governing Analysis Workflow Configuration

All configuration files and parameters which are used to process minitrees are stored in configs/ and data/.

Main configuration file
The file configs/2015-v1.conf lists the JSON file, the minitree and skim version numbers, the blinded or unblinded status of minitrees made from collision data, and the location of the .dat file that lists all collision data and MC samples from which minitrees have been made.

The following information is also stored in configs/:

Datasets
The datasets which have been processed into minitrees are listed in
configs/datasets.dat

Triggers
The list of triggers used to filter events at skim level.

Utilities

How to estimate cross sections
Once the dataset is entered in the datasets.dat config file with dashes in place of the cross section and cross section uncertainty, you can use the script
 ./scripts/getXsec.sh
to update the file with the estimated cross sections. Cross sections for SM processes can be found here: https://twiki.cern.ch/twiki/bin/viewauth/CMS/StandardModelCrossSectionsat13TeV

Instructions on how to get the cross sections of a MC sample: https://twiki.cern.ch/twiki/bin/view/CMS/HowToGenXSecAnalyzer
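The exact column layout of datasets.dat is not shown on this page; assuming a whitespace-separated file where placeholder dashes mark a missing cross section and uncertainty, a quick way to list the samples that still need getXsec.sh is a small helper like this (the `missing_xsec` name and the column convention are illustrative, not part of the repository):

```shell
# Print the (first-column) names of datasets whose entries still
# contain placeholder dashes for the cross section or uncertainty.
# The whitespace-separated layout is an assumption.
missing_xsec() {
  awk '{ for (i = 2; i <= NF; i++) if ($i == "-") { print $1; next } }' "$1"
}
```

Running it on configs/datasets.dat before and after ./scripts/getXsec.sh shows which entries were filled in.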

Analysis sequence

Skims

Follow the link Skim

Note
microAOD_cff

To make the skims, run:

scripts/crabSkimJobs/makeAndRunCrabSkimScripts.sh

and then submit the relevant CRAB tasks:

for skim in tmp/skim*.py; do crab submit -c $skim; done

To update the list of datasets with the published skims:

scripts/crabSkimJobs/updateDatasetsWithSkims.sh
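To monitor the submitted skim tasks, `crab status -d <taskdir>` can be run over each task directory. The helper below only prints the commands (a dry run; pipe to `sh` to execute), and the `crab_projects/` location is CRAB's default working-directory layout, assumed here rather than taken from this repository:

```shell
# Print a "crab status" command for each task directory found
# under the given base directory (dry run).
print_status_cmds() {
  for task in "$1"/crab_*; do
    [ -d "$task" ] || continue   # skip when the glob matches nothing
    echo "crab status -d $task"
  done
}
```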

MiniTree production

The code that produces miniTrees works only on LXPLUS at CERN, on the batch system, using CRAB2.

To load CRAB2 environment in bash:

source /afs/cern.ch/cms/LCG/LCG-2/UI/cms_ui_env.sh
cmsenv
source /afs/cern.ch/cms/ccs/wm/scripts/Crab/crab.sh
voms-proxy-init -voms cms -out $HOME/gpi.out
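Since `voms-proxy-init` above writes the proxy to `$HOME/gpi.out`, a simple sanity check before submitting jobs is to verify that the file exists and is non-empty. The `proxy_ok` helper is a sketch, not part of the repository:

```shell
# Sanity check: the proxy file written by
# "voms-proxy-init ... -out $HOME/gpi.out" must exist and be non-empty.
proxy_ok() {
  if [ -s "$1" ]; then
    echo "proxy file present: $1"
  else
    echo "no proxy found at $1 -- rerun voms-proxy-init" >&2
    return 1
  fi
}
```

Usage: `proxy_ok "$HOME/gpi.out" || exit 1` at the top of a submission script.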

If the user is not Shervin, the scheduler should be set to LSF:

./script/makeLSFTTreeScripts.sh --scheduler=lsf

To check the status of the jobs and merge their outputs:

./script/makeLSFTTreeScripts.sh --scheduler=lsf --check

CAVEAT: the script is set to save the ntuples in Shervin's area on EOS. It has not yet been generalized, so it is not expected to work out of the box for other users or other storage elements. One can also use --scheduler=remoteGlidein to run over the GRID; the storage element is still CERN. LSF has not been tested.

The cfg to be used is:

test/runAnalysis_cfg.py

There are four paths defined in runAnalysis_cfg.py: SignalRegion, FlavourSideband, LowDiLeptonSideband, and DYtagAndProbe.

Lepton and jet IDs are applied when these paths are run, and a filter on the number of ID'd objects is applied in the SignalRegion, FlavourSideband, and LowDiLeptonSideband paths (by the dilepton object candidate filters). No such filter is applied in the DYtagAndProbe path.
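Assuming the paths are declared in the usual CMSSW style (`name = cms.Path(...)`), their names can be listed straight from the cfg; the `list_paths` helper is an illustrative sketch, not a script shipped with the repository:

```shell
# List the names of the cms.Path objects defined in a CMSSW cfg,
# assuming the usual "name = cms.Path(...)" declaration style.
list_paths() {
  sed -nE 's/^([A-Za-z0-9_]+)[[:space:]]*=[[:space:]]*cms\.Path.*/\1/p' "$1"
}
```

Usage: `list_paths test/runAnalysis_cfg.py` should print the four path names above.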

After minitrees have been made by runAnalysis_cfg.py, analysis.cpp can be used once a grid proxy has been created.

Validation of the production

See Validation scripts

TTBar estimation using collision data

See Estimate of the TTbar background from data

DY estimation

See Estimate of the DY+Jets background using MC and collision data

Cross section limits

See Extracting 1D and 2D WR cross section and exclusion limits

AOB

JEC
The twiki page is: https://twiki.cern.ch/twiki/bin/viewauth/CMS/JECDataMC

74X_mcRun2_asymptotic_v5

Selection ID scale factors

Electrons: https://cds.cern.ch/record/2118397/export/hx?ln=en

Muons: https://twiki.cern.ch/twiki/bin/viewauth/CMS/MuonReferenceEffsRun2

Energy corrections
Muons: https://twiki.cern.ch/twiki/bin/viewauth/CMS/RochcorMuon