
In the lepton+jets channel, the electron+jets and muon+jets final states each account for 15% of the total ttbar decays. In the di-lepton channel, the fractions of the electron+electron, muon+muon and electron+muon final states are 1%, 1% and 2%, respectively. Besides decaying to a W boson and a bottom quark, the top quark can also decay to a W boson and a charm quark (0.1%) or a W boson and a down quark (0.01%).

The LHC and ATLAS detector

Large Hadron Collider

The LHC (Large Hadron Collider)[19] is currently the largest and highest-energy superconducting accelerator in the world. It is installed in a 26.7 km tunnel, originally constructed for the CERN LEP machine, at a depth ranging from 50 to 175 meters underground. The LHC was built by a collaboration of over 10,000 scientists and engineers from about 100 countries. Fig 2.1 shows the four major detectors at the four interaction points (IP) of the LHC:

ATLAS[22] (A Toroidal LHC ApparatuS) at IP1 and CMS[23] (Compact Muon Solenoid) at IP5: These are high-luminosity experiments, able to operate at an expected peak luminosity of L = 10^34 cm^-2 s^-1, designed to search for the Higgs boson, supersymmetric particles, dark matter and other new physics beyond the Standard Model. Because the two detectors are independent, their results can cross-check each other.

LHCb[24] (Large Hadron Collider beauty) at IP8: It is dedicated to measuring CP-violation parameters and related phenomena in the decays of b hadrons, with a peak luminosity of L = 10^32 cm^-2 s^-1.

ALICE[27] (A Large Ion Collider Experiment) at IP2: ALICE, aiming at a peak luminosity of L = 10^27 cm^-2 s^-1, is designed to study Pb-Pb collisions when the LHC is operated with ion beams.

There are two further, smaller experiments, LHCf[26] (Large Hadron Collider forward) and TOTEM[25] (TOTal Elastic and diffractive cross section Measurement), installed on the LHC ring near the ATLAS and CMS regions, respectively. LHCf is the smallest of the LHC experiments and is designed to study neutral-particle production cross sections in the very forward region of proton-proton and nucleus-nucleus interactions. TOTEM detects protons from elastic scattering at small angles, aiming at a peak luminosity of L = 2×10^29 cm^-2 s^-1 with 156 bunches.

The protons in the LHC come from the injector chain Linac2 → Proton Synchrotron Booster (PSB) → Proton Synchrotron (PS) → Super Proton Synchrotron (SPS), as shown in Fig 2.2[19][20]. The protons from Linac2 are accelerated to 1.4 GeV in the PSB and then boosted to 26 GeV in the PS. The SPS, CERN's second-biggest accelerator, increases the energy of the protons to 450 GeV. Lead ions are injected from Linac3 into the Low Energy Ion Ring (LEIR) and then follow the same chain as the protons.

Fig 2.1 The LHC and four detectors on it

Fig 2.2 The LHC Accelerator Complex[19][20]

The design center-of-mass energy for LHC proton-proton collisions is 14 TeV. During 2010 and 2011, the LHC delivered an integrated luminosity of 5.6 fb^-1 to ATLAS and 5.7 fb^-1 to CMS at √s = 7 TeV[21]. In 2012, the LHC started running at √s = 8 TeV. The luminosity status of the LHC in 2011 is shown in Fig 2.3.

Fig 2.3 Luminosity status at LHC in 2011[21]

The ATLAS detector

ATLAS[22] is one of the two general-purpose experiments at the LHC. It is 46 m long, 25 m high and weighs about 7,000 tonnes. It is composed of four main sub-systems: the magnet system[28], the inner detector[32][33], the calorimeters[34][35] and the muon spectrometer[36]. The layout of ATLAS is shown in Fig 2.4. The ATLAS experiment is designed to search for the Higgs boson, measure electroweak interactions and look for new physics beyond the Standard Model.

Fig 2.4 The ATLAS detector

Magnet system

The magnet system of the ATLAS detector is composed of four parts: the central solenoid (CS), the barrel toroid (BT) and two end-cap toroids (ECT). The magnetic field extends over a region 26 m in length and 22 m in diameter. The positions of the four parts are shown in Fig 2.4.

The central solenoid surrounds the ATLAS inner detector and is installed along the beam axis. It provides a 2 T axial magnetic field for the inner detector. Its inner and outer diameters are 2.46 m and 2.56 m, and its axial length is 5.8 m. The barrel toroid, which is 25.3 m long with inner and outer diameters of 9.4 m and 20.1 m, is made up of 8 coils assembled symmetrically and radially around the beam axis. It provides a peak field of 3.9 T for the muon spectrometer. The end-cap toroids provide the bending power in the end-cap regions of the muon spectrometer. The parameters of these parts are listed in Table 2.1 below.

Table 2.1 The property of ATLAS magnet system

Inner detector

The ATLAS Inner Detector (ID, shown in Fig 2.5) is located close to the beam pipe; it is about 7 m long and has a radius of about 1.15 m. It is designed to provide excellent momentum resolution as well as primary- and secondary-vertex measurements for charged tracks within the pseudorapidity range |η| < 2.5. The ID also contributes to electron identification over |η| < 2.0. It is installed in a cylindrical envelope within a 2 T solenoidal magnetic field and is made up of three sub-detectors: a Pixel detector, a Semiconductor Tracker (SCT) and a Transition Radiation Tracker (TRT).

Fig 2.5 The ATLAS Inner Detector

The pixel detector is the innermost sub-detector of the ID and contains three barrel layers and three disks on each end-cap. It comprises 1744 modules with 46080 read-out channels per module. This high granularity gives the pixel detector very good position resolution for charged particles: the intrinsic resolutions for barrel and end-cap pixels are 10 μm in R-φ and 115 μm in the z direction.

The SCT is the middle part of the ID. It consists of four layers in the barrel region with 2122 modules, and two end-caps each containing nine disks, with 1966 modules in total across the disks. The nominal resolution is 17 μm in the in-plane lateral direction and 580 μm in the in-plane longitudinal direction. The SCT is the most important part for position and momentum measurement, as it covers a larger area and provides more measurement points than the pixel detector.

The Transition Radiation Tracker is designed to track charged particles as well as to identify electrons. It is composed of drift straw tubes with a diameter of 4 mm. The TRT contains 73 layers of straws interleaved with fibres (barrel) and 160 straw planes interleaved with foils (end-cap), which produce transition radiation for electron identification. All charged tracks with pT > 0.5 GeV and |η| < 2.0 traverse at least 36 straws, except in the barrel-end-cap transition region (|η| ≈ 0.8-1.0), where the number decreases to a minimum of 22 crossed straws. The TRT resolution in the R-φ direction is 130 μm.

Calorimeters

The ATLAS calorimeters are sampling calorimeters designed to identify objects (electrons, photons, jets and E_T^miss) by absorbing and measuring their energy. Electrons and photons lose their energy in the traversed material through bremsstrahlung and pair production, leading to showers of electromagnetic (EM) particles in the calorimeters. Another kind of shower, the so-called jet, is formed through the hadronization of gluons and quarks. The ATLAS calorimeters cover the range |η| < 4.9 and are divided into two parts: the electromagnetic calorimeter and the hadronic calorimeter. The fine granularity of the EM calorimeter over the η region matched to the inner detector allows precision measurements of electrons and photons. The rest of the calorimeter has a granularity adequate for jet reconstruction and E_T^miss measurements. The structure of the ATLAS calorimeters is shown in Fig 2.6.

Fig 2.6 The ATLAS Calorimeter

Electromagnetic calorimeter

The EM calorimeter covers the range |η| < 3.2 and has high energy and position resolution. Its accordion geometry provides full coverage in φ without any cracks and allows fast signal extraction at the rear or front of the electrodes. It is divided into a barrel part (|η| < 1.475) and two end-cap parts (1.375 < |η| < 3.2). Each end-cap calorimeter is composed of two coaxial wheels: an outer wheel covering 1.375 < |η| < 2.5 and an inner wheel covering 2.5 < |η| < 3.2.

The barrel part is made up of two half-barrels, one covering Z > 0 (0 < η < 1.475) and the other Z < 0 (-1.475 < η < 0). Each half-barrel consists of three layers with different granularities, as shown in Fig 2.7. The first layer is finely segmented in η. The second layer collects the largest fraction of the energy of the electromagnetic shower, and the third layer collects only the tail of the shower and is therefore less segmented in η. The thickness of the EM calorimeter is no less than 22 radiation lengths (X0) in the barrel and more than 24 X0 in the end-caps. The energy resolution of the EM calorimeter is σ_E/E = 10%/√E(GeV) ⊕ 0.7%.

Fig 2.7 A barrel module of the electromagnetic calorimeter with the cells of the three layers
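The stochastic and constant terms of the resolution quoted above combine in quadrature. A minimal sketch (the function name and default values are mine, taken from the barrel numbers in the text):

```python
import math

def em_resolution(E_GeV, stochastic=0.10, constant=0.007):
    """Fractional EM energy resolution: sigma_E/E = a/sqrt(E) (+) b in quadrature."""
    return math.hypot(stochastic / math.sqrt(E_GeV), constant)

# At high energy the constant term dominates: at 100 GeV the stochastic
# contribution is 1%, so the total is sqrt(0.01^2 + 0.007^2) ~ 1.2%.
```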

Hadronic Calorimeters

The ATLAS hadronic calorimeters, designed to measure the energy of hadrons, consist of the tile calorimeter, the liquid-argon hadronic end-cap calorimeter (HEC) and the liquid-argon forward calorimeter (FCal), the latter covering 3.1 < |η| < 4.9.

In the tile calorimeter the detection medium consists of scintillator tiles and the absorber is iron. It is composed of three parts: one central barrel with |η| < 1.0 and two extended barrels covering 0.8 < |η| < 1.7. Each barrel is made up of 64 modules of size Δφ ≈ 0.1. The hadronic end-cap calorimeter, covering 1.5 < |η| < 3.2, is made up of two wheels, each containing two longitudinal sections. The size of the readout cells is Δη×Δφ = 0.1×0.1 in the region |η| < 2.5 and Δη×Δφ = 0.2×0.2 at larger η. The forward calorimeter covers 3.1 < |η| < 4.9 and is 4.7 m away from the interaction point. Each FCal is split into three 45 cm deep modules: one electromagnetic module (FCal1) and two hadronic modules (FCal2 and FCal3). The resolution for the hadronic barrel and end-cap is σ_E/E = 50%/√E(GeV) ⊕ 3%, and for the FCal σ_E/E = 100%/√E(GeV) ⊕ 10%. The layout of the calorimeter modules is shown in Fig 2.8.

Fig 2.8 Layout of the calorimeter modules

Muon Spectrometer

Muons generally penetrate to the outermost part of the detector because of their strong penetrating power, which makes them easy to distinguish from other particles. The muon spectrometer is the outermost part of the ATLAS detector and is designed to detect muons and measure their momentum. The ATLAS muon spectrometer is composed of two kinds of chamber sub-systems: the precision-measurement tracking chambers, comprising the monitored drift tubes (MDT) and cathode strip chambers (CSC), and the trigger chambers, comprising the resistive plate chambers (RPC) and the thin gap chambers (TGC). The structure of the muon spectrometer is shown in Fig 2.9.

The chambers in the barrel are arranged in three concentric cylindrical shells around the beam axis at radii of 5 m, 7.5 m, and 10 m. In the two end-cap regions, muon chambers form large wheels, located at distances of 7.4 m, 10.8 m, 14 m, and 21.5 m from the interaction point.

Fig 2.9 The ATLAS Muon Spectrometer

The precision-measurement tracking chambers

These chambers are designed for precision measurement of the track coordinates in the principal bending direction of the magnetic field.

The MDTs cover the region |η| < 2.7. Each chamber is composed of three to eight layers of drift tubes filled with a gas mixture of 93% Ar and 7% CO2 at an absolute pressure of 3 bar. The average resolution is 80 μm per tube, or about 35 μm per chamber.

The Cathode Strip Chambers (CSC), which cover the region 2.0 < |η| < 2.7, are designed with higher rate capability and time resolution to cope with the high particle flux in that region. The CSCs are multi-wire proportional chambers with cathode planes segmented into strips. The resolution is 40 μm per chamber in the bending plane and about 5 mm in the transverse plane.

The trigger chambers

The trigger chambers are designed to provide bunch-crossing identification, well-defined pT thresholds, and the muon coordinate in the direction orthogonal to that measured by the precision-tracking chambers.

The resistive plate chamber is a gaseous parallel electrode-plate detector located in the barrel region with |η| < 1.05. It is filled with a gas mixture of C2H2F4/Iso-C4H10/SF6. The RPCs are designed to trigger on muon events with the design bunch spacing of 25 ns. The thin gap chambers are multi-wire proportional chambers located in the end-cap region covering 1.05 < |η| < 2.4. They provide a nominal resolution of 2-6 mm in R and 3-7 mm in φ.

The cross-section of the ATLAS muon system is shown in Fig 2.10.

Fig 2.10 Cross-section of the muon system in a plane containing the beam axis (bending plane).

Trigger and Data Acquisition

The LHC will have 40 million bunch crossings per second, corresponding to a rate of 40 MHz at the design luminosity of 10^34 cm^-2 s^-1, with each event occupying about 1.5 MB of storage. It is impossible to record such a huge amount of data, and most physics studies are based on channels with low cross sections. The trigger system is therefore designed to select only the interesting events for physics analysis. The trigger consists of three levels of event selection: Level-1 (L1), Level-2 (L2), and the event filter (EF). The L2 and EF together are known as the High Level Trigger. An overview of the ATLAS trigger system is shown in Fig 2.11.

The L1 trigger is a hardware-based system that selects events using information from the calorimeters and muon detectors[37]. It searches for high transverse-momentum muons, electrons, photons and jets, as well as large missing and total transverse energy. The L1 calorimeter trigger identifies high-ET objects from all the calorimeters, and energy isolation can be required for the electron/photon and τ triggers. The L1 muon trigger is based on information from the muon trigger chambers: the RPCs in the barrel and the TGCs in the end-caps. The Central Trigger Processor (CTP) combines the information for the different object types into trigger menus for different requirements. The L1 trigger reduces the event rate from 40 MHz to less than 75 kHz. Regions-of-Interest (RoIs) are defined by the L1 trigger and used to seed the L2 trigger.

The L2 trigger is a software-based system seeded by the RoIs from L1. Besides the information from the calorimeters and muon trigger chambers, the L2 trigger also uses the inner detector and the muon precision-tracking chambers. The L2 reduces the event rate to about 3.5 kHz, rejecting about 95% of the events that passed the L1 trigger.

The EF trigger is also software-based and uses the information from all detectors. By using better reconstruction algorithms and tighter object selections, the event rate is reduced to about 200 Hz.
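The three-level rate reduction quoted above (40 MHz at L1 input, 75 kHz after L1, 3.5 kHz after L2, 200 Hz after the EF) can be summarized by per-level rejection factors; a small sketch (function name is mine):

```python
def rejection_factors(rates_hz):
    """Per-level rejection factors between successive trigger output rates."""
    return [a / b for a, b in zip(rates_hz, rates_hz[1:])]

# Quoted rates: collisions 40 MHz, L1 75 kHz, L2 3.5 kHz, EF 200 Hz.
factors = rejection_factors([40e6, 75e3, 3.5e3, 200.0])
# The L2 factor of ~21 means it keeps fewer than 1 in 20 L1-accepted events,
# i.e. it rejects about 95% of them, as stated in the text.
```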

The data acquisition system (DAQ) is shown in Fig 2.12. After an event is selected by the L1 trigger, the data from the pipelines are transferred off the detector to the Readout Drivers (RODs) and stored temporarily in Readout Buffers (ROBs). The data in the ROBs associated with the RoIs are then used in the L2 trigger selection. Finally, the data for events passing the EF trigger selection are used for physics analysis.

Fig 2.11 Overview of the trigger system

Fig 2.12 The diagram of the ATLAS data acquisition system


Objects selection in ATLAS

In this chapter, the pre-selection of electrons, muons, jets, b-tagging and E_T^miss at ATLAS is described.

Electron

Offline electron selection

The electron reconstruction is based on cluster information from the electromagnetic calorimeter and the associated charged tracks in the inner detector[39][40]. In the EM calorimeter, seed clusters with a transverse energy above 2.5 GeV are formed with a sliding-window algorithm in units of 3×5 middle-layer cells. Charged tracks are then matched to the seed cluster by extrapolating from the last measurement point to the middle layer. If more than one track is associated with a seed cluster, the track with silicon hits and the smallest ΔR = √(Δη² + Δφ²) is taken as the best track.

The electron candidates are selected in the region |η| < 2.47, excluding the transition region 1.37 < |η| < 1.52, and with transverse energy ET > 25 GeV (ET = Ecluster/cosh(ηtrack)). In addition, to provide good separation between isolated electrons and jets, electrons are required to satisfy the Tight++ criteria, which include strict selection cuts on calorimeter, tracking and combined variables.

To reduce the QCD multi-jet background, additional tight isolation cuts are applied to the electrons, with cone sizes of ΔR = 0.2 and ΔR = 0.3 for calorimeter and track isolation, respectively. The calorimeter isolation cut is derived for Tight++ efficiency working points of 98%, 95% and 90%, with different cone sizes (ΔR = 0.2, 0.3 and 0.4). The track isolation cut is derived for efficiency working points of 99%, 98%, 97% and 90% for cone sizes ΔR = 0.2, 0.3 and 0.4. For top analyses, the combination of EtCone20@90 (ΔR = 0.2 at the 90% efficiency point) and PtCone30@90 (ΔR = 0.3 at the 90% efficiency point) is recommended. In addition, jets within ΔR = 0.2 of the electron direction are removed from the event; this step is called jet-electron overlap removal. After the overlap removal, if another jet with pT > 20 GeV is found within ΔR = 0.4, the electron is discarded.
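The two-step jet-electron overlap removal can be sketched as follows (a toy implementation with objects as plain dicts; the field names are illustrative, not the actual ATLAS EDM accessors):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance dR = sqrt(deta^2 + dphi^2), with dphi wrapped to [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def overlap_removal(electron, jets):
    """Step 1: drop jets within dR < 0.2 of the electron.
    Step 2: discard the electron if a remaining jet with pT > 20 GeV
    lies within dR < 0.4.  Returns (keep_electron, surviving_jets)."""
    kept = [j for j in jets
            if delta_r(electron['eta'], electron['phi'], j['eta'], j['phi']) >= 0.2]
    keep_electron = not any(
        j['pt'] > 20.0 and
        delta_r(electron['eta'], electron['phi'], j['eta'], j['phi']) < 0.4
        for j in kept)
    return keep_electron, kept
```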

Trigger

The electrons selected in both data and MC are first required to match the electron triggers EF_e20_medium, EF_e22_medium and EF_e22vh_medium1 in data-taking periods B-H, I-K and L-M, respectively. For analyses using electrons with pT >> 100 GeV, it is recommended to also keep electrons passing the trigger EF_e45_medium1 in periods L-M, to avoid the efficiency loss due to the hadronic core veto at very high pT.

Efficiencies and scale factors

The electron reconstruction, identification and trigger efficiencies are measured with the tag-and-probe (T&P) method using Z → ee samples and cross-checked with W → eν samples. The scale factors, defined as ε_data/ε_MC, are provided by the ATLAS Egamma group to improve the agreement between data and Monte Carlo. The Tight++ scale factors are binned in 9 η bins with ET-corrections in 6 ET bins, as shown in Fig 3.1.

Fig 3.1 The electron ID scale factors as a function of η and ET
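As a sketch of how a per-bin efficiency and its scale factor enter an analysis (function names are mine; in practice the scale factor multiplies the MC event weight):

```python
def efficiency(n_pass, n_total):
    """Tag-and-probe efficiency in one (eta, ET) bin: passing probes / all probes."""
    return n_pass / n_total

def scale_factor(eff_data, eff_mc):
    """SF = eps_data / eps_mc, applied as a per-electron weight in MC."""
    return eff_data / eff_mc

# e.g. 90% efficiency in data vs 92% in MC gives a weight slightly below 1,
# pulling the simulated yield down to match data.
```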

Energy scale and resolution

The electron energy scale is obtained from Z → ee, J/ψ → ee and W → eν events. The energy scale is corrected in data as a function of the electron η, φ and ET. Energy smearing is applied to the MC to match the energy resolution in data, with the corresponding uncertainties.

Muon

Muon tracks are reconstructed independently in the inner detector (ID) and the muon spectrometer (MS), while the momentum is obtained by combining information from the ID, the calorimeters and the MS[41]. The muon reconstruction follows the recommendations of the ATLAS Muon Combined Performance (MCP) group, using the Muid algorithm[45]. The muon object selection is defined as follows:

Muons are required to be combined.

Muons are required to be within the detector acceptance, |η| < 2.5.

Muons are required to have pT > 20 GeV.

Muons are required to pass the MCP ID track quality cuts.

E_T^0.2 < 4 GeV, the transverse energy in a cone of ΔR = 0.2 around the muon.

p_T^0.3 < 2.5 GeV, the sum of track transverse momenta in a cone of ΔR = 0.3.

ΔR(μ, j) > 0.4, where j is any jet with pT > 25 GeV and |JVF| > 0.75.

The last three isolation requirements are used to suppress the backgrounds originating from heavy flavor decays.
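The selection list above can be condensed into a single predicate; a sketch with illustrative field names (not the actual ATLAS EDM accessors):

```python
def good_muon(mu, dr_to_jets):
    """Sketch of the muon selection listed above.  `mu` is a dict of kinematic
    and isolation variables; `dr_to_jets` are the dR distances to every jet
    with pT > 25 GeV and |JVF| > 0.75."""
    return (mu['is_combined']
            and abs(mu['eta']) < 2.5
            and mu['pt'] > 20.0
            and mu['passes_id_quality']       # MCP ID track quality cuts
            and mu['etcone20'] < 4.0          # calorimeter isolation, dR < 0.2
            and mu['ptcone30'] < 2.5          # track isolation, dR < 0.3
            and all(dr > 0.4 for dr in dr_to_jets))
```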

Trigger

The single-muon triggers used in the 2011 data analyses are mu18 and mu18_medium. The two triggers differ by the tightened Level-1 criteria in the mu18_medium trigger chain. The mu18 trigger is unprescaled in all runs up to the end of period I; mu18_medium is unprescaled in the remainder of the data. Both triggers are available in the MC11 simulation.

Efficiency and Scale Factors

The efficiencies and scale factors of the trigger, isolation and reconstruction requirements are measured using the tag-and-probe method in Z → μμ events, estimated by the MCP group to improve the MC agreement with data. The efficiency changes during the run due to hardware issues in individual trigger chambers. The efficiencies are parameterized as a function of muon η and φ. The measured scale factors are shown in Fig 3.2; they are split into the three data-taking periods B-I, J-K and L-M.

Fig 3.2 The muon trigger scale factors for (a) the mu18 trigger and (b) the mu18_medium trigger

Jet

Color-charged quarks and gluons fragment into sprays of hadrons before they can be detected directly; such a spray is called a jet. A jet forms electromagnetic and hadronic showers in the ATLAS calorimeters. These showers are measured as calorimeter clusters and grouped by jet-finding algorithms. The jets used in this study are reconstructed with the anti-kt algorithm (R = 0.4)[47], starting from topological clusters built from energy deposits in the calorimeter. Jet finding is performed on topological clusters at the electromagnetic (EM) scale, which correctly measures the energy deposited by electrons or photons.

Jet calibration

The jet calibration is applied to jets at the EM scale, with a pile-up subtraction scheme that accounts for both in-time and out-of-time pile-up[49]. This correction is parameterized by the average number of interactions in a luminosity block (μ) and the number of primary vertices in an event (NPV), in bins of jet pseudorapidity η. Jets are then calibrated to the hadronic scale using Monte Carlo-based pT- and η-dependent correction factors.

Jet energy scale uncertainty

The jet energy scale uncertainty is estimated with data from the full 2011 dataset, using the residual uncertainties after the in-situ correction. This estimate achieves good precision by describing the various uncertainty sources as separate parameters, which are fully correlated across transverse momentum and pseudorapidity and uncorrelated among themselves. A total of 61 nuisance parameters are available from the in-situ analysis. The number of sources for the baseline uncertainty is reduced to 6 by using a matrix diagonalization technique and limiting the range of the correlation matrix to 600 GeV, which covers the main kinematic range for jets in top analyses. Further sources of uncertainty due to the JES calibration, the high-pT extrapolation, the intercalibration of jets and pile-up are also considered, leading to 12 nuisance parameters. Three additional uncertainty contributions arising from the light-quark and gluon flavour composition and from the difference in the jet energy scale in the presence of a close-by jet are considered for the multi-jet environment of top-quark and top-background events. An additional pT-dependent uncertainty is applied to jets matched to b-jets. In total, the uncertainty is described by 16 nuisance parameters.
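The matrix-diagonalization reduction mentioned above rests on an eigen-decomposition of the covariance matrix: keep the leading eigen-directions as effective, uncorrelated nuisance parameters and treat the rest as a residual. A sketch of the idea (not the actual ATLAS tool):

```python
import numpy as np

def reduce_nuisance_parameters(cov, n_keep):
    """Diagonalize a (symmetric, PSD) covariance matrix and keep the n_keep
    leading eigen-directions as effective nuisance-parameter shift vectors.
    Returns (shift_vectors, residual_covariance)."""
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]                 # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    shifts = vecs[:, :n_keep] * np.sqrt(vals[:n_keep])   # one column per parameter
    residual = cov - shifts @ shifts.T                   # what the kept set misses
    return shifts, residual
```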

Jet energy resolution

The jet energy resolution (JER) is measured with the di-jet balance and bisector techniques[50]. As the resolutions measured in data and Monte Carlo agree within uncertainty, no systematic smearing is applied to jets in the Monte Carlo simulation for the central value. The uncertainty on the jet energy resolution is evaluated by smearing jets according to the systematic uncertainties of the resolution measurement in the full 2011 dataset.
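Evaluating the JER systematic by smearing amounts to adding, in quadrature, the extra resolution needed to move the nominal resolution up to its +1σ variation; the central value is left unsmeared. A toy version (function name and interface are mine):

```python
import random

def smear_jet_energy(E, sigma_rel_nominal, sigma_rel_up, rng=random.Random(42)):
    """Smear a jet energy by the quadrature difference between the +1 sigma
    and nominal relative resolutions.  With no difference, E is unchanged."""
    extra = (sigma_rel_up ** 2 - sigma_rel_nominal ** 2) ** 0.5
    return E * (1.0 + rng.gauss(0.0, extra))
```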

Jet selection: jet quality and pile-up rejection

Jets arising from beam-gas interactions, cosmic rays or large calorimeter noise, rather than from in-time real energy deposits in the calorimeters, are identified by jet quality criteria as so-called bad jets. Bad jets are tagged using selections on the energy fractions in the electromagnetic and hadronic end-cap calorimeters, the pulse shape of the calorimeter cells, the jet time relative to the beam collision, and track information.

A cut on the Jet Vertex Fraction (JVF)[51] is applied to reduce the effect of in-time pile-up. This variable is the fraction of the track pT associated with the jet that comes from the primary vertex, and serves as a discriminant to estimate the probability that the jet comes from pile-up. Aiming at a high selection efficiency for hard-scatter jets and the best rejection of pile-up jets, the optimal working point for standard top analyses is to reject jets with |JVF| < 0.75. Efficiency and inefficiency scale factors are available for the |JVF| > 0.75 working point, based on data/Monte Carlo comparisons of Z → μμ and Z → ee events.
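The JVF itself is simply a track-pT fraction; a minimal sketch (function name is mine):

```python
def jet_vertex_fraction(track_pts, track_from_pv):
    """Fraction of the jet's matched track pT that originates from the
    primary vertex.  Returns None for jets with no matched tracks."""
    total = sum(track_pts)
    if total == 0.0:
        return None
    pv_sum = sum(pt for pt, from_pv in zip(track_pts, track_from_pv) if from_pv)
    return pv_sum / total

# A jet with 8 GeV of its 10 GeV of matched track pT from the PV has
# JVF = 0.8 and passes the |JVF| > 0.75 requirement.
```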

Jet reconstruction efficiency

The calorimeter jet reconstruction efficiency[72] was derived from charged tracks reconstructed in the inner detector and used to build track-jets. The reconstruction efficiency was defined as the fraction of probe track-jets matched to a calorimeter jet and measured using a tag-and-probe method. The difference between data and Monte Carlo was propagated to the Monte Carlo by discarding a fraction of jets at random within the inefficiency range.

E_T^miss Description and Performance

The ATLAS Jet/ETmiss working group recommends an object-based E_T^miss, MET_RefFinal[52], in which the associated topological clusters are calibrated at the electromagnetic (EM) scale. Following a similar approach, the E_T^miss used by the Top group (MET_RefFinal_em_tightpp) is consistent with the Top object reconstruction definitions. The objects include electrons, photons, jets, soft jets and muons. The remaining energy from cells not associated with any of these high-pT objects is included as a CellOut term, calibrated at the EM scale. The E_T^miss is calculated as:

E_T^miss = √((E_x^miss)^2 + (E_y^miss)^2) (3.1)

where

-E_(x,y)^miss = E_(x,y)^RefElec + E_(x,y)^RefJet + E_(x,y)^RefSoftJet + E_(x,y)^RefMuon + E_(x,y)^CellOut

The electron term uses electrons from the electron AOD collection passing the isEM::ElectronTightPP identification with pT > 10 GeV, with all the electron correction factors except the out-of-cluster correction. Photons and tau objects are not included in the calculation. The jets are divided into two categories: refined jets, included in the E_T^miss at the EM+JES energy scale, and soft jets, included at the EM scale. The refined jets are any jets in the AntiKt4TopoEMJets collection with pT > 20 GeV, while jets between 7 GeV and 20 GeV are included as soft jets. The muon term in the E_T^miss is determined from the pT of muons from the MuidMuonCollection over the full acceptance range of the muon spectrometer, |η| < 2.7. All combined muons within |η| < 2.5 are included in the E_T^miss. The muon term also distinguishes isolated muons (MET_MU_TRACK) and non-isolated muons (MET_MU_SPECTRO). MET_MU_TRACK requires the tracks to be isolated from all AntiKt4TopoEMJets (cone size 0.4) by ΔR = √(Δη² + Δφ²) = 0.3, and includes the muon energy deposited in the calorimeter in the CellOut term. For MET_MU_SPECTRO muons, the energy deposited in the calorimeter is included in the jet term.
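Eq. (3.1) and the term decomposition can be sketched as follows (toy code; each term supplies its summed (Ex, Ey), and the sign convention follows the equation: the missing components are minus the sum of the measured terms):

```python
import math

def missing_et(term_vectors):
    """Combine the (Ex, Ey) sums of the RefElec, RefJet, RefSoftJet, RefMuon
    and CellOut terms into E_T^miss, per Eq. (3.1)."""
    ex = -sum(v[0] for v in term_vectors)
    ey = -sum(v[1] for v in term_vectors)
    return math.hypot(ex, ey)
```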

The primary sources of uncertainty related to the E_T^miss come from the scale and resolution of the objects from which the E_T^miss is reconstructed, and from the description of the additional calorimeter energy from pile-up events. For top analyses using E_T^miss, the overall systematic uncertainty on the E_T^miss scale is obtained from the uncertainties on the constituent terms, depending on the final state: notably the uncertainty in the scale and resolution of the charged lepton(s) and jets, and to a smaller extent the uncertainties on the topological clusters outside any of the reconstructed objects (CellOut term) and on the soft jets from the underlying event (SoftJets term).

B tagging

B-jet identification, so-called b-tagging, is one of the most important selection cuts for events containing top quarks. Separating b-jets from other jets helps to reduce background processes with few b-jets. The identification of jets containing b-quarks is based on their specific properties. The b-hadron from b-quark hadronization retains about 70% of the original b-quark momentum and has a large mass of about 5 GeV. These properties give the decay products of the b-hadron large transverse momenta and large opening angles. In addition, the b-hadron travels a relatively long distance before decaying because of its long lifetime. This leads to measurable secondary vertices and impact parameters of the decay products. The transverse and longitudinal impact parameters d0 and z0 are defined as the (r,φ)-projection and z-coordinate of a track at the point of closest approach of the track to the primary vertex, respectively.

The b-tagging Algorithms

Many b-tagging algorithms based on the b-quark properties have been developed in ATLAS[53], such as the secondary-vertex-based algorithms SV0 and SV1, the impact-parameter-based algorithms JetProb, IP1D and IP3D, and the JetFitter algorithms based on the topology of the b-hadron decay. For the reconstruction of events with top quarks, a combination of three b-tagging algorithms (JetFitter, IP3D and SV1) is used to extract a final tagging discrimination weight for each jet. The so-called MV1 tagger takes the weights of these three tagging algorithms, as well as the pT and η of the jet, as inputs to a neural network (TMVA) and determines a single discriminant variable. The weight distribution and the rejection power of the MV1 tagger are shown in Fig 3.3. Comparing the rejection factors of the various taggers, the MV1 tagger clearly has the best performance among the tagging algorithms provided at ATLAS and is therefore strongly recommended for all kinds of top-physics analyses.

Fig 3.3 The rejection as a function of the tagging efficiency for different b-jet taggers

In summary, the event selection in the ttbar semi-leptonic channel is as follows:

Electron channel:

1. The electron trigger must have fired.

2. The event must contain at least one good vertex (of type primary vertex or pile-up vertex) with more than 4 tracks.

3. Exactly one good electron was found.

4. No good muon was found.

5. The good electron must match the object that fired the trigger.

6. For data only: No LooseBad jet may be included.

7. E_T^miss > 30 GeV is required.

8. The transverse W mass mT(W) must be larger than 30 GeV.

9. At least four good jets with pT > 25 GeV and a jet vertex fraction |JVF| > 0.75 were found.

10. For the b-jet definition the MV1 tagger at the 70% working point was used.

Muon channel:

1. The muon trigger must have fired.

2. The event must contain at least one good vertex (of type primary vertex or pile-up vertex) with at least 4 tracks.

3. Exactly one good muon was found.

4. No good electron was found.

5. The good muon must match the object that fired the trigger.

6. Electrons and muons must not share a track.

7. For data only: No LooseBad jet may be included.

8. E_T^miss > 20 GeV is required.

9. mT(W) + E_T^miss must be larger than 60 GeV.

10. At least four good jets with pT > 25 GeV and a jet vertex fraction |JVF| > 0.75 were found.

11. For the b-jet definition the MV1 tagger at the 70% working point was used.
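The channel-dependent kinematic cuts above, including the transverse W mass, can be sketched as follows (mT(W) uses the standard definition from the lepton pT, E_T^miss and their azimuthal separation; function names are mine, and the trigger, lepton and b-tag requirements are assumed to be applied elsewhere):

```python
import math

def w_transverse_mass(lep_pt, lep_phi, met, met_phi):
    """mT(W) = sqrt(2 pT(l) MET (1 - cos(dphi)))."""
    return math.sqrt(2.0 * lep_pt * met * (1.0 - math.cos(lep_phi - met_phi)))

def passes_kinematic_cuts(channel, met, mtw, n_good_jets):
    """MET / mT(W) / jet-multiplicity cuts: steps 7-9 for the electron
    channel and steps 8-10 for the muon channel."""
    if channel == 'electron':
        return met > 30.0 and mtw > 30.0 and n_good_jets >= 4
    if channel == 'muon':
        return met > 20.0 and (mtw + met) > 60.0 and n_good_jets >= 4
    raise ValueError(channel)
```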


Semi-lepton event selection

Data

The data sample used in this thesis corresponds to 4.7 fb^-1 of 7 TeV proton-proton collision data collected by the ATLAS detector in 2011. For the ttbar analysis, events are triggered using a high-pT single-lepton trigger (electron or muon)[68].

The MC samples

Top-quark pair events are simulated with the MC@NLO Monte Carlo generator, where the top mass is set to 172.5 GeV[69][70]. Parton showering and the underlying event are modelled using HERWIG and JIMMY, respectively. W+jets and Z+jets events are generated with AlpGen. W+c, W+cc̄ and W/Z+bb̄ events are generated separately with AlpGen, and double counting is avoided by removing events with hard HERWIG-produced b- and c-quarks from the W/Z+jets samples. Single-top events are also generated using MC@NLO, while diboson events (WW, WZ, ZZ) are generated using HERWIG. All Monte Carlo simulation samples are generated with multiple pp interactions per bunch crossing (pile-up). The simulated events are re-weighted so that the distribution of the number of interactions per crossing in simulation matches that in data. The samples are then processed through the GEANT4[71] simulation of the ATLAS detector and the standard reconstruction software. The MC samples used are listed in Table 4.1.


KLFitter Reconstruction

The Kinematic Likelihood Fitter (KLFitter)[55] is a general tool for kinematic ttbar event reconstruction using a likelihood approach. The tool is independent of the physics process (and in principle also independent of the experiment). The kinematic fit is performed for a given event topology, i.e. physics process. The KLFitter maximizes a likelihood function with respect to the given parameters and constraints. Typical parameters are the energies of jets and charged leptons, which are varied in the fit. Constraints are typically given by Breit-Wigner distributions around the invariant mass of a decay vertex (for example the invariant mass of the W boson decaying into two quarks). The KLFitter permutes the jets to assign a b-quark candidate to the leptonic top-decay side, a b-quark candidate to the hadronic top-decay side, and the light quarks to the W-boson decay. The KLFitter is used with a constraint on the top-quark mass.

In the KLFitter, b-tagging information can be used to improve the reconstruction efficiency. There are currently three different methods to deal with b-tagging within the KLFitter: two veto methods and a weighting method using a particular working point.

The two veto methods are the simplest way to make use of b-tagging information by vetoing jet permutations in which a b-tagged jet is in the position of a parton coming from the decay of the hadronic W boson. Naturally, events with more than two b-tags cannot be handled in this way.

A more sophisticated way of using b-tagging information is possible by choosing a particular working point with an efficiency (between 0 and 1) for the tagging of b-quarks and a rejection (larger than 1) of light quarks. Efficiency and rejection can be different for every jet when b-tagging scale factors are taken into account. The following term is multiplied to the event probability:

Δp = {ε, if b_had was b-tagged; (1−ε), if b_had was not b-tagged} × {ε, if b_lep was b-tagged; (1−ε), if b_lep was not b-tagged} × {1/R, if q_1 was b-tagged; (1−1/R), if q_1 was not b-tagged} × {1/R, if q_2 was b-tagged; (1−1/R), if q_2 was not b-tagged}   (4.1)
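The per-permutation weight of Eq. (4.1) can be sketched as a small toy; this is illustrative only and not KLFitter's actual interface, and the efficiency eps = 0.70 and light-jet rejection rej are placeholder values:

```python
# Illustrative toy of the working-point b-tagging weight of Eq. (4.1);
# eps (b-tag efficiency) and rej (light-jet rejection) are placeholder values.

def btag_weight(tagged, eps=0.70, rej=130.0):
    """tagged: booleans for the b_had, b_lep, q1, q2 positions of a permutation."""
    w = 1.0
    for pos in ("b_had", "b_lep"):             # jets placed on b-quark positions
        w *= eps if tagged[pos] else (1.0 - eps)
    for pos in ("q1", "q2"):                   # jets placed on light-quark positions
        w *= 1.0 / rej if tagged[pos] else (1.0 - 1.0 / rej)
    return w

# A permutation with both b-tagged jets on b-quark positions is strongly favoured:
good = btag_weight({"b_had": True, "b_lep": True, "q1": False, "q2": False})
bad  = btag_weight({"b_had": False, "b_lep": False, "q1": True, "q2": True})
```

Permutations that place b-tagged jets on light-quark positions are suppressed by the small factor 1/R, which is how the weighting method steers the fit without a hard veto.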

The corresponding top-quark performance with and without b-tagging is shown in Fig 4.1 and Fig 4.2 below. Detailed performance plots can be found on the common KLFitter plots web page[56].

Fig 4.1 Object performance of official KLFitter without b-tagging

Fig 4.2 Object performance of official KLFitter with b-tagging

An additional distinction of the up and down quark from the hadronic decay side is made by reweighting the event probability with the pT and tagger-weight distributions of the jets:

p = p_pT^b(pT^jet1) · p_pT^b(pT^jet2) · p_pT^u(pT^jet3) · p_pT^d(pT^jet4) · p_tagweight^b(tagweight^jet1) · p_tagweight^b(tagweight^jet2) · p_tagweight^u(tagweight^jet3) · p_tagweight^d(tagweight^jet4)   (4.2)

The event probability of the KLFitter-reconstructed events is shown in Fig 4.3.

Fig 4.3 The event probability of KLFitter reconstruction in electron and muon channel

The W+jets background estimation

The W+jets background is estimated with the W-charge asymmetry method, which is based on the fact that W-boson production at the LHC is charge asymmetric: more W+ bosons are produced than W− bosons. Theoretically, the ratio of the W+ and W− cross-sections is relatively well understood:

r = σ(pp→W+) / σ(pp→W−)   (4.3)

So, if we further assume that W+jets is the dominant source of charge asymmetry in the data after applying the same event selection as for the single-lepton+jets analysis, the total number of W+jets events can be estimated from the formula:

N_{W+} + N_{W−} = (N^MC_{W+} + N^MC_{W−}) / (N^MC_{W+} − N^MC_{W−}) · (D+ − D−) = (r_MC + 1)/(r_MC − 1) · (D+ − D−)   (4.4)

where D+ (D−) is the number of events selected from data with a positively (negatively) charged lepton, and r_MC is the asymmetry ratio measured from the Monte Carlo simulation. Based on this charge-asymmetry method and the common top-quark selection, the ATLAS Top group provides W+jets normalization scale factors to scale the W+jets events in Monte Carlo to the data level. The scale factors for the different jet bins are shown in Table 4.2 with their systematic uncertainties:

Numjet | Muon pre-tag | Muon tagged | Electron pre-tag | Electron tagged
1 jet | 1.05 +0.09/-0.07 (+9%/-7%) | 1.05 +0.26/-0.23 (+25%/-22%) | 0.98 +0.09/-0.08 (+9%/-8%) | 0.98 +0.28/-0.27 (+28%/-27%)
2 jet | 0.97 +0.07/-0.06 (+7%/-6%) | 0.97 +0.13/-0.12 (+13%/-12%) | 0.88 +0.09/-0.06 (+10%/-7%) | 0.88 +0.16/-0.15 (+18%/-17%)
3 jet exclusive | 0.89 +0.07/-0.06 (+8%/-7%) | 0.89 +0.12/-0.11 (+14%/-12%) | 0.81 +0.10/-0.08 (+12%/-10%) | 0.81 +/-0.14 (+/-18%)
4 jet exclusive | 0.95 +0.11/-0.10 (+11%/-10%) | 0.95 +0.17/-0.15 (+17%/-15%) | 0.83 +/-0.10 (+/-12%) | 0.83 +0.14/-0.16 (+17%/-19%)
3 jet inclusive | 0.90 +0.08/-0.06 (+9%/-7%) | 0.90 +0.12/-0.11 (+14%/-12%) | 0.81 +0.09/-0.07 (+11%/-9%) | 0.81 +/-0.14 (+/-17%)
4 jet inclusive | 0.94 +0.10/-0.09 (+11%/-10%) | 0.94 +0.16/-0.14 (+17%/-15%) | 0.83 +0.11/-0.09 (+13%/-11%) | 0.83 +/-0.14 (+/-17%)
5 jet inclusive | 0.90 +0.16/-0.14 (+18%/-16%) | 0.90 +0.22/-0.20 (+24%/-22%) | 0.82 +0.21/-0.15 (+25%/-18%) | 0.82 +0.24/-0.20 (+29%/-24%)

Table 4.2 W+jets normalization scale factors in the different jet bins
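The estimate of Eq. (4.4) amounts to a one-line formula; the following sketch uses purely illustrative numbers for D+, D− and r_MC:

```python
def wjets_from_asymmetry(d_plus, d_minus, r_mc):
    """Eq. (4.4): N(W+) + N(W-) estimated from the data charge difference,
    with the asymmetry ratio r_MC taken from simulation."""
    return (r_mc + 1.0) / (r_mc - 1.0) * (d_plus - d_minus)

# Purely illustrative numbers: with r_MC = 1.5, a charge difference of 1000
# data events corresponds to (2.5/0.5) * 1000 = 5000 W+jets events in total.
n_wjets = wjets_from_asymmetry(d_plus=5500, d_minus=4500, r_mc=1.5)
```

The charge-symmetric backgrounds (ttbar, Z+jets, QCD) cancel in the difference D+ − D−, which is what makes the method largely data-driven.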

The QCD background estimation

The QCD background is estimated with a data-driven method called the matrix method[54]. The matrix method is based on selecting two categories of events, using a loose and a tight lepton selection. For the semi-leptonic channel, the number of events with one lepton passing the loose selection can be written as:

N^loose=N_real^loose+N_fake^loose (4.5)

where N_real^loose and N_fake^loose are the numbers of events with a real and a fake lepton passing the loose cuts. The number of events with one tight lepton can be written as:

N^tight = ε_real N_real^loose + ε_fake N_fake^loose   (4.6)

where N^tight is the total number of tight-lepton events, which can be divided into real-lepton and fake-lepton events passing the tight selection. ε_real and ε_fake are the efficiencies for loose real and fake leptons to pass the tight selection:

ε_real = N_real^tight / N_real^loose ,   ε_fake = N_fake^tight / N_fake^loose   (4.7)

where N_real^tight and N_fake^tight are the numbers of real-lepton and fake-lepton events selected by the tight cuts.

The number of QCD events, the so-called fake events, passing the tight selection can then be expressed as:

N_fake^tight = ε_fake / (ε_real − ε_fake) · (N^loose ε_real − N^tight)   (4.8)

The efficiency ε_real is measured with the tag-and-probe method on samples of Z bosons decaying to two leptons, while ε_fake is measured in the same samples from control regions. To achieve high precision for top physics, the selection for the control samples is the same as the selection used for the top-physics analysis described in the event-selection chapter.
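The matrix method of Eqs. (4.6)-(4.8) can be sketched with a closure test, where pseudo-counts are built from assumed real/fake components and placeholder efficiencies:

```python
def qcd_tight_yield(n_loose, n_tight, eps_real, eps_fake):
    """Eq. (4.8): N_fake^tight from the loose/tight counts and efficiencies."""
    return eps_fake / (eps_real - eps_fake) * (n_loose * eps_real - n_tight)

# Closure test with assumed components and placeholder efficiencies.
n_real_loose, n_fake_loose = 9000.0, 1000.0
eps_real, eps_fake = 0.90, 0.20
n_loose = n_real_loose + n_fake_loose
n_tight = eps_real * n_real_loose + eps_fake * n_fake_loose   # Eq. (4.6)
n_qcd = qcd_tight_yield(n_loose, n_tight, eps_real, eps_fake)
print(n_qcd)   # recovers eps_fake * n_fake_loose = 200
```

The formula is simply the inversion of the 2x2 linear system formed by Eqs. (4.5) and (4.6) in the unknowns N_real^loose and N_fake^loose.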

Event yield and control plots

The numbers of expected and observed data events in each channel after the selection are listed in Table 4.3 and Table 4.4. The data agree well with the expectation. In those tables, the numbers of remaining signal and background events from Monte Carlo simulation are normalized to 4.7 fb-1. The control plots for the electron and muon channels are shown below:

Fig 4.4 The lepton pT for pre-tag(a), 1-b-tagged(b), reconstructed(c) in the electron channel; pre-tag(d), 1-b-tagged(e), reconstructed(f) in the muon channel.

Fig 4.5 The jet pT for pre-tag(a), 1-b-tagged(b), reconstructed(c) in the electron channel; pre-tag(d), 1-b-tagged(e), reconstructed(f) in the muon channel.

Fig 4.6 The E_T^miss for pre-tag(a), 1-b-tagged(b), reconstructed(c) in the electron channel; pre-tag(d), 1-b-tagged(e), reconstructed(f) in the muon channel.

Table 4.3 The event yield for pre-tag, 1-b-tagged, and reconstructed events in the electron channel

Table 4.4 The event yield for pre-tag, 1-b-tagged, and reconstructed events in the muon channel


The electron identification study

In the ATLAS experiment, the identification (ID) of electrons is very important for the study of the detector performance and for almost all physics with electrons. In this chapter, the measurement of the electron ID efficiency using a tag-and-probe method in W-boson decays to an electron and a neutrino is described.

The electron identification and identification efficiency

Electron identification is based on information from the ATLAS calorimeter, the tracker, and their combination, which provides good separation between electrons and the jets that fake electrons. There are three operating points according to electron identification efficiency and jet rejection: the loose, medium and tight selections, where each tighter operating point is a subset of the looser ones[39][40]:

Loose: use the shower shape variables of the second calorimeter layer and hadronic leakage variables.

Medium: first-calorimeter-layer cuts, track-quality requirements and track-cluster matching are added on top of the loose selection.

Tight: adds E/p, b-layer hit requirements and the particle-identification potential of the TRT on top of the medium selection.

The details of the three operating points, with the cuts on the variables, are shown in Table 5.1.

During the 2011 run, at instantaneous luminosities of a few 10^33 cm-2 s-1, three re-optimized selections were defined in order to cope with the trigger rates and the pile-up: the loose++, medium++ and tight++ operating points.

Loose++: rEta, rHad, wEta2, Eratio and wstot shower-shape cuts, nPix+nPixOutliers >= 1 and nSi+nSiOutliers >= 7, loose track-cluster matching in eta (Δη < 0.015).

Medium++: all loose++ shower-shape cuts but at tighter values, tighter track-cluster matching in eta (Δη < 0.005) and d0 < 5 mm, stricter b-layer and pixel hit requirements (bL + bLOutliers > 0 for |η| < 2.01 and nPix + nPixOutliers > 0; nPix + nPixOutliers > 1 for |η| > 2.01), loose TRT HT-fraction cuts, tighter shower shapes for |η| > 2.01.

Tight++: cuts on shower shapes from medium++ at the same or tighter values; adds E/p and Δφ track-cluster matching cuts and d0 < 1 mm; adds the conversion bit; bL + bLOutliers > 0 for all η.

Table 5.1 Variables used for the loose, medium and tight electron selection

Table 5.2 Variables used for the loose++, medium++ and tight++ electron selection

The Event selection and reconstruction

The W→eν channel is selected with the following cuts:

Events must pass the GoodRun list for the 2011 data

E_T^miss cleaning cuts

Removal of dead OTX regions and of the LAr hole from dead FEBs

Primary vertex

Electrons are required to have pT > 15 GeV and |eta| < 2.47

The E_T^miss should satisfy MET_Lochad_Topo_Et > 30 GeV

The W transverse mass should be larger than 50 GeV

At the same time, track-quality cuts (SCTHits >= 7, PixHits >= 1) are required to improve the agreement between data and MC. After this pre-selection of the objects, the comparison of data and Monte Carlo is shown in Fig 5.1.

Fig 5.1 Data/MC comparison of electron pT(a), eta(b), phi(c), and W transverse mass(d)

Tag and probe method

The efficiency of these operating points can be obtained with the so-called "tag and probe" (T&P) method. The T&P method aims to select a pure and unbiased sample of electrons, called probe electrons, using selection cuts, called tag cuts, on other objects in the event. The ID efficiency is defined as:

ε(e) = N(probe electrons passing the ID cuts) / N(all probe electrons)   (5.1)

The procedure of the tag-and-probe method is shown in Fig 5.2. The channel is W→eν. By applying a tag cut on the E_T^miss object from the neutrino side, a pure probe-electron sample is first selected. Then a data-driven isolation template is created to subtract the background from the signal region, for the probes and for the probes passing the ID cuts. The data-driven background templates are obtained in the probe container by reversing the ID cuts. After the background subtraction, the electron identification efficiencies for the six operating points (loose, medium, tight, loose++, medium++, tight++) are obtained.

Fig 5.2 The tag and probe method

MET isolation cut: this cut requires that the Δφ between the E_T^miss and the jet closest to it satisfies a certain cut value, since it can significantly separate the signal channel W→eν from the rest of the stored data.

Fig 5.3 MET isolation in data and MC

Template subtraction: the variable used to build the background template is usually the isolation, since the isolation value clearly distinguishes signal electrons from fake electrons. The method is the so-called side-band method: cutting on Etcone40/pT, as shown in Fig 5.4, a cut value of 0.2 divides the signal region from the background region.

Fig 5.4 Etcone40/pT for probe electrons and background electrons
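The efficiency of Eq. (5.1), combined with the side-band template subtraction, can be sketched as below; the histogram contents, the bin boundary and the helper names are illustrative only, not the actual analysis code:

```python
import numpy as np

def id_efficiency(probes, probes_id, template, cut_bin):
    """Eq. (5.1) with side-band subtraction. probes, probes_id and template are
    histograms over Etcone40/pT; cut_bin is the first side-band bin (>= 0.2)."""
    def signal(hist):
        # Normalise the background template to the histogram in the side band,
        # then subtract its extrapolation under the signal region.
        scale = hist[cut_bin:].sum() / template[cut_bin:].sum()
        return hist[:cut_bin].sum() - scale * template[:cut_bin].sum()
    return signal(probes_id) / signal(probes)

probes    = np.array([800., 150., 60., 40.])   # signal region: first two bins
probes_id = np.array([700.,  90., 30., 20.])   # probes passing the ID cuts
template  = np.array([ 50.,  50., 60., 40.])   # background shape (reversed ID cuts)
eff = id_efficiency(probes, probes_id, template, cut_bin=2)
```

The same subtraction is applied to the denominator and the numerator of Eq. (5.1), so the template normalisation largely cancels in the ratio.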

After the MET isolation cut and the template subtraction, the electron ID efficiencies measured with the tag-and-probe method are shown in the tables below:

Table 5.3 loose, medium, tight efficiency as a function of electron pt

Table 5.4 loose++, medium++, tight++ efficiency as a function of electron pt

Fig 5.5 loose, medium, tight efficiency as a function of electron pt (a)and eta(b), loose++, medium++, tight++ efficiency as a function of electron pt(c) and eta(d)

Electron ID efficiency measurement dependence on trigger

To obtain a pure and unbiased electron sample, the triggers used for this analysis are MET triggers. In the 2011 data, 4 triggers are used according to the data period; they are listed and summarized in Fig 5.6.

Fig 5.6 MET Trigger status for the 2011 data

A study is needed to check the dependence of the electron ID measurement on the different triggers. The comparisons of electron performance and efficiencies between the EF_xs60_noMu_L1EM10XS45 and EF_e13_etcut_xs60_noMu triggers, and between the EF_e13_etcut_xs60_noMu and EF_e13_etcut_xs60_noMu_dphi2j10xe07 triggers, are shown below:

Fig 5.7 The comparison of electron pT(a), eta(b), Etmiss(c) and W transverse mass(d) between the EF_xs60_noMu_L1EM10XS45 and EF_e13_etcut_xs60_noMu triggers

Fig 5.8 Comparison of electron pT(a), eta(b), Etmiss(c) and W transverse mass(d) between EF_xs60_noMu_L1EM10XS45 and EF_e13_etcut_xs60_noMu_dphi2j10xe07

Fig 5.9 The comparison of electron ID efficiency as a function of eta and pT between the EF_xs60_noMu_L1EM10XS45 and EF_e13_etcut_xs60_noMu triggers

Fig 5.10 The comparison of electron ID efficiency as a function of eta and pT between EF_e13_etcut_xs60_noMu and EF_e13_etcut_xs60_noMu_dphi2j10xe07

The vertex dependence of this study

Primary-vertex reconstruction is affected by pile-up at the LHC because of its high luminosity. The vertex-dependence study is therefore an important way to study the pile-up effect in the electron ID measurement.

In this section, a cross-check of this study is described. Following the common procedure of the ID efficiency measurement, the dependence of the ID efficiency on the primary vertex is shown below.

Fig 5.11 The electron id efficiency as a function of the primary vertex number

Fig 5.12 The electron id efficiencies as a function of the electron eta with different vertex number

Conclusion

In this chapter, the electron identification efficiency measurement using the tag-and-probe method with the 2011 data was described. Comparing the measured efficiencies between the old menu (loose, medium, tight) and the ++ menu (loose++, medium++, tight++): the loose++ efficiency is similar to loose, medium++ is lower than medium, while tight++ is higher than tight.

The study of the trigger dependence shows good agreement between EF_e13_etcut_xs60_noMu and EF_e13_etcut_xs60_noMu_dphi2j10xe07, and a 2% difference over the whole electron pT range between EF_xs60_noMu_L1EM10XS45 and EF_e13_etcut_xs60_noMu. The study of the primary-vertex dependence shows that the difference is most evident in the low-pT region.


Top polarization study using the unfolding technique

Bayes' theorem[58]

For one event in an experiment, the observed value of a physics quantity is written as E; it is a value affected and distorted by the detector acceptance. It may therefore originate from any of the truth-value bins (C_i, i = 1, 2, ..., n_C) of the truth distribution. The probability that the observed value E came from the truth value C_i can be written as:

p(C_i|E) = p(E|C_i) p(C_i) / Σ_{l=1}^{n_C} p(E|C_l) p(C_l)   (6.1)

where p(E|C_i) is the probability that the truth value C_i migrates to the observed value E, and p(C_i) is the probability of C_i over the whole range of truth-value bins. Then, assuming we observe n(E) events with value E, the number of truth events with value C_i contributing to n(E) is:

n̂(C_i) = n(E) p(C_i|E)   (6.2)

The observed value can also take several values, written as E_j (j = 1, 2, ..., n_E). Formula (6.1) can then be extended to describe the probability that the observed value E_j came from the truth value C_i:

p(C_i|E_j) = p(E_j|C_i) p(C_i) / Σ_{l=1}^{n_C} p(E_j|C_l) p(C_l)   (6.3)

Here Σ_{i=1}^{n_C} p(C_i) = 1 and Σ_{i=1}^{n_C} p(C_i|E_j) = 1, and ε_i ≡ Σ_{j=1}^{n_E} p(E_j|C_i) gives the efficiency for detecting C_i in any of the possible observed values.

After N_obs events are measured in the experiment, we obtain a distribution n(E) = {n(E_1), n(E_2), ..., n(E_{n_E})}. Formula (6.2) can then be written as:

n̂(C_i)|_obs = Σ_{j=1}^{n_E} n(E_j) p(C_i|E_j)   (6.4)

Taking the efficiency ε_i into account, the total number of events should be:

n̂(C_i) = (1/ε_i) Σ_{j=1}^{n_E} n(E_j) p(C_i|E_j)   (6.5)

The formula can be written as:

n̂(C_i) = Σ_{j=1}^{n_E} M_ij n(E_j)   (6.6)

with the matrix M_ij:

M_ij = p(E_j|C_i) p(C_i) / ( [Σ_{l=1}^{n_E} p(E_l|C_i)] [Σ_{l=1}^{n_C} p(E_j|C_l) p(C_l)] )   (6.7)

where the efficiency factor 1/ε_i of Eq. (6.5) is absorbed into M_ij through the term Σ_{l=1}^{n_E} p(E_l|C_i) = ε_i.

Using this matrix, we can invert the measured distribution back to the truth distribution (the unfolded distribution).

In the measurement, the background events must be subtracted from the measured values n(E_j), and n̂(C_i) must be corrected for the events not observed by the experiment (missed events). This procedure is illustrated below:

Fig 6.1 A diagram of the unfolding procedure

The matrix is produced from the signal Monte Carlo (MC) sample, so differences between data and simulated MC will bias the unfolded distribution. An iterative procedure is used to improve the unfolded distribution:

unfold the observed distribution with the matrix produced from the MC;

replace the Monte Carlo truth distribution with the unfolded distribution;

produce a new matrix with this new distribution;

repeat the first step with this new matrix.

The procedure stops when the bias between the unfolded distribution and the truth distribution of the previous iteration is small enough.

The more iterations, the smaller the bias, but the larger the uncertainty from the matrix, because each iteration adds new uncertainties into the measurement.
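The unfolding of Eqs. (6.3)-(6.7), together with the iteration described above, can be sketched as follows; the two-bin response matrix and event counts are toy inputs, not from the analysis:

```python
import numpy as np

def unfold(response, observed, prior, n_iter=4):
    """Iterative Bayesian unfolding: response[j, i] = p(E_j | C_i),
    observed[j] = n(E_j), prior[i] = p(C_i)."""
    eff = response.sum(axis=0)                            # eps_i = sum_j p(E_j|C_i)
    for _ in range(n_iter):
        joint = response * prior                          # p(E_j|C_i) p(C_i)
        post = joint / joint.sum(axis=1, keepdims=True)   # p(C_i|E_j), Eq. (6.3)
        truth = (post * observed[:, None]).sum(axis=0) / eff   # Eq. (6.5)
        prior = truth / truth.sum()                       # updated prior for the next pass
    return truth

# Toy two-bin example with 10% bin-to-bin migration and full efficiency:
R = np.array([[0.9, 0.1],
              [0.1, 0.9]])
true = np.array([700.0, 300.0])
obs = R @ true                                            # folded pseudo-"data"
unfolded = unfold(R, obs, prior=np.array([0.5, 0.5]))
print(unfolded)                                           # approaches [700, 300]
```

Starting from a flat prior, each pass moves the result closer to the true distribution, illustrating the bias-versus-iterations behaviour discussed above.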

If every MC resulted in the same migration-probability matrix, as would be the case with an infinite number of events, iterations starting from different truth distributions would converge to the same measurement without error divergence, as shown in Fig 6.2(a).

Fig 6.2 Measured physics result from two different matrices with (a)same migration probability and (b) different migration probability.

However, the migration-probability matrices are statistically limited, so the final measurements obtained from different migration-probability matrices differ and diverge with more iterations (explained by the uncertainty of the matrix), as shown in Fig 6.2(b).

Based on these two issues, the common procedure takes different matrices for one input dataset and considers only the uncertainty from the matrices. An example of an unfolded physics value is shown in Fig 6.3, where the input truth value equals 0.2. The unfolded results show that two matrices can be used to choose the iteration number at their crossing point; taking the error bars into account as well, the iteration number should be smaller than the one at the crossing point.

Fig 6.3 Unfolded result from different matrices

The W polarization study

The W boson, as a spin-1 particle, has three helicity states, −1, 0 and +1, as sketched in Fig 7.1. These helicity states are called left-handed, longitudinal and right-handed, with fractions written as FL, F0 and FR, respectively.

Fig 7.1 Sketches of angular momentum conservation in t ��W+b decay in the top rest frame.

In the SM, a massless particle must be left-handed, so right-handed W bosons are not allowed if the b quark is assumed massless. According to angular-momentum conservation, the Standard Model predicts the three values accurately from formula (7.1):

F_0 = M_t² / (M_t² + 2M_W²) = 0.703 + 0.002·(M_t − 175)
F_L = 2M_W² / (M_t² + 2M_W²) = 0.297 − 0.002·(M_t − 175)
F_R = 0.000   (7.1)
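A quick numerical check of the leading-order formula (7.1), using M_t = 175 GeV and M_W = 80.4 GeV (the W mass value here is an assumption, not taken from the text):

```python
def w_helicity_fractions(m_top=175.0, m_w=80.4):
    """Leading-order fractions of Eq. (7.1) from the top and W masses."""
    denom = m_top**2 + 2.0 * m_w**2
    f0 = m_top**2 / denom          # longitudinal
    fl = 2.0 * m_w**2 / denom      # left-handed
    fr = 0.0                       # forbidden for a massless b quark
    return f0, fl, fr

f0, fl, fr = w_helicity_fractions()
print(round(f0, 3), round(fl, 3), fr)   # 0.703 0.297 0.0
```

The fractions sum to one by construction, and F0 dominates because M_t² is much larger than 2M_W².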

With QCD and electroweak corrections, the SM predicts the three values shown in the table below[59]:

Table 7.1 The fractions of the W helicities in the SM

In experiment, these three values can be extracted from the angular distribution of the lepton coming from the W-boson decay. The angular distribution follows Equation (7.2) below[60]:

1/N · dN/d cosθ* = 3/2 [ F_0 (sinθ*/√2)² + F_L ((1 − cosθ*)/2)² + F_R ((1 + cosθ*)/2)² ]   (7.2)

where the angle θ* is the angle between the charged-lepton direction in the W-boson rest frame and the W-boson direction in the top-quark rest frame. The distribution of cosθ* is shown below:

Fig 7.2 Angular distribution of Equation(7.2) in the SM
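The extraction of the fractions from a binned cosθ* distribution can be sketched as a linear template fit to the three terms of Eq. (7.2); the pseudo-data below are built directly from SM-like fractions, so the fit simply has to return them:

```python
import numpy as np

edges = np.linspace(-1.0, 1.0, 7)                 # six equal-width bins
centers = 0.5 * (edges[:-1] + edges[1:])

def shapes(c):
    # normalised angular shapes of the three helicity components of Eq. (7.2)
    return np.vstack([0.75 * (1 - c**2),          # F0 term
                      0.375 * (1 - c)**2,         # FL term
                      0.375 * (1 + c)**2])        # FR term

A = shapes(centers).T                             # bins x fractions design matrix
data = A @ np.array([0.70, 0.30, 0.00])           # toy SM-like distribution
frac, *_ = np.linalg.lstsq(A, data, rcond=None)
print(np.round(frac, 3))                          # recovers ~[0.7, 0.3, 0.0]
```

The three shapes span {1, cosθ*, cos²θ*}, so with at least three bins the linear system is well determined.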


The scan technique in the unfolding method

Based on the discussion in chapter 6, the observed angular distribution H^obs(cosθ*) usually differs from the physics prediction H^phys(cosθ*):

H^obs(cosθ*) = ∫ R(cosθ*; C_1) H^phys(C_1) dC_1   (7.3)

To visualize the observed distributions directly and compare them with known physics shapes, the response matrix M(cosθ*; C_1) is needed to unfold the observed distribution to truth level:

H^phys(cosθ*) = ∫ M(cosθ*; C_1) H^obs(C_1) dC_1   (7.4)

For the W polarization the uncertainty is already much smaller, so unfolding the data with an MC sample close to the data is desired. The procedure is then simplified and works as follows:

Generate many matrices and reconstruction-level distributions by mixing the three Protos samples (pure F0, FL and FR helicity) with the SM sample (to increase the statistics).

Compare the input data distribution with all those reconstructed distributions at the different W-helicity points by computing a p-value (match probability), as shown in Fig 7.3.

The W-helicity point with the best agreement (best p-value) is used to define the nominal matrix; this step is the so-called scan step.

The matrix produced at this W-helicity point is then used to unfold the input data and obtain the unfolded distribution.

The measured W helicity is extracted from the unfolded distribution using Equation (7.2).

Fig 7.3 The match probability between data and MC samples at different W-helicity points

This procedure is called the scanning technique; it is validated in both data and MC tests in Section 7.2 and confirmed to work as expected.
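The scan step can be sketched as follows; the templates, the candidate grid, and the simple χ² p-value are toy choices, not the actual analysis implementation:

```python
import math
import numpy as np

def chi2_sf(x, df):
    # survival function of a chi-squared variable for even df
    k = df // 2
    return math.exp(-x / 2.0) * sum((x / 2.0) ** i / math.factorial(i) for i in range(k))

def scan(data, templates, grid):
    """templates: histograms for the pure F0, FL, FR shapes;
    grid: candidate (F0, FL) points with FR = 1 - F0 - FL."""
    best = None
    for f0, fl in grid:
        mix = f0 * templates["F0"] + fl * templates["FL"] + (1 - f0 - fl) * templates["FR"]
        c2 = float(np.sum((data - mix) ** 2 / mix))   # simple chi2 distance
        pval = chi2_sf(c2, df=len(data))              # match probability
        if best is None or pval > best[0]:
            best = (pval, f0, fl)
    return best

c = np.linspace(-5/6, 5/6, 6)                         # six bin centres
norm = 3000.0 / 3.0                                   # toy events times bin width
T = {"F0": norm * 0.75 * (1 - c**2),
     "FL": norm * 0.375 * (1 - c)**2,
     "FR": norm * 0.375 * (1 + c)**2}
data = 0.70 * T["F0"] + 0.30 * T["FL"]                # toy "data" at (F0, FL) = (0.70, 0.30)
pval, f0, fl = scan(data, T, grid=[(0.60, 0.40), (0.70, 0.30), (0.80, 0.20)])
```

The candidate whose mixture best matches the data gets the highest p-value, and its matrix would then be taken as the nominal one for the unfolding.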

The validation of the unfolding technique

Binning choice

The binning choice is studied in this part. The aim is a compromise binning that avoids both high correlations between bins and too few events in any bin. The resolution is estimated with simulated MC@NLO samples: for the cosθ* values in one bin, the RMS of the reconstruction-level value minus the truth-level value is taken as the angular resolution in that bin. The resolution distributions are shown in Fig 7.4. In general the bin width is taken to be between half and twice the resolution; six equal-width bins are finally chosen as the binning.

Fig 7.4 The resolution in electron and muon channel

Improvement by iteration

Other physics effects (e.g. spin correlation) prevent us from finding the "real best" matrix, so after the matrix is determined, iteration is needed to reduce the effects of other physics. The test here uses the SM matrix to unfold a sample without spin correlation. The measured W helicity is shown in Fig 7.5. This test shows that the iteration can improve the measurement by reducing the effects of other physics.

Table 7.6 Differential results in electron and muon channel

Inclusive result

To reach the desired W-helicity precision, an inclusive measurement is needed. Following the scan technique described in Section 7.1, we produce response matrices at all the different W-helicity points by mixing the SM sample and the three Protos samples (F0, FL, FR), and then compute the p-value between the distribution of the data and that of each mixed sample. The scan goes over all matrices and finds the "best" matrix with the highest p-value (match probability). The inclusive result is obtained by unfolding the background-subtracted data with the "best" matrix. The statistical uncertainty is obtained by fluctuating the cosθ* distribution of the background-subtracted data (1000 times). The unfolded cosθ* distributions with statistical errors are shown in Fig 7.13, and the fitted results are given in Table 7.7.

Fig 7.13 The unfolded distribution by using the scan technical in electron(left) and muon(right) channel.

Table 7.7 Measured fraction of three W helicities with stat and sys errors in electron and muon channel by unfolding method.
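The Poisson-fluctuation estimate of the statistical uncertainty can be sketched as below; the histogram contents are toy numbers, and the unfolding step is replaced by a trivial placeholder (the bin sum) so that the resulting spread can be checked against sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
hist = np.array([120., 260., 400., 390., 250., 110.])   # toy cos(theta*) bins

toys = rng.poisson(hist, size=(1000, hist.size))        # 1000 fluctuated copies
results = toys.sum(axis=1)                              # placeholder "measurement" per toy
stat_err = results.std()                                # spread = statistical uncertainty

# For a plain sum of Poisson bins the spread should be close to sqrt(N_total) ~ 39.
```

In the real procedure each fluctuated histogram would be unfolded and fitted, and the spread of the fitted fractions would give the statistical error quoted in Table 7.7.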

Systematic uncertainties

For each systematic effect, the analysis is re-run with the considered variation to estimate the one-standard-deviation (σ) change in each bin of the variable of interest due to the specific effect. The varied distributions are obtained for the upward and downward variations of the variable of interest for each effect and for each channel separately. If the direction of the variation is not defined (as in the case of an estimate resulting from the difference of two models), the estimated variation is taken to have the same size in both the upward and the downward direction. The different estimates are assumed to be uncorrelated with each other unless specified otherwise. A list of the systematic uncertainties considered in this analysis is given below. An ensemble test is used for each uncertainty.

Signal modelling Compare predictions from the MC@NLO and POWHEG (interfaced to HERWIG) generators.

Parton showering The effect of parton-shower modelling was tested by comparing two POWHEG samples interfaced to HERWIG and PYTHIA, respectively.

ISR/FSR Pseudo-data are produced from Monte Carlo samples with varied ISR/FSR settings and passed through the measurement procedure. The maximum measured variation is taken as the systematic uncertainty.

PDF sets The impact of the choice of parton density functions (PDF) in simulation was studied by reweighting the events by different NLO PDF sets and taking half of the total variation interval as the uncertainty.

Top Mass Pseudo-data are produced from several Monte Carlo samples generated with different top masses and passed through the measurement procedure; the expectation of the measurement is aligned with the input top mass and fitted with a straight line. The slope of the line times the current precision of the top mass is taken as the systematic error.

Color reconnection The impact of different models of color reconnection was studied by comparing samples simulated with ACERMC using different generator settings. These are the Perugia tune without color reconnection as well as the tune A-Pro and ACR-Pro. The largest deviation between each tune with and without color reconnection was taken as systematic uncertainty.

Jet Energy Scale The jet energy is varied up or down by one sigma, whose value depends on the η and pT of the jet. With these pseudo-data the measured estimator varies from the nominal one; the difference is taken as the systematic error.

Jet Energy resolution The jet energy is smeared and an ensemble test is performed.

Jet reconstruction efficiency The jet reconstruction efficiency was estimated using minimum bias and QCD dijet events.

Lepton energy scale The possible error from the electron energy scale is estimated in a similar way as for the jet energy scale. The MCP smearing class is employed to estimate the muon energy scale uncertainty.

Lepton energy resolution The lepton energy is smeared in MC and the error is obtained with an ensemble test.

Lepton reconstruction The mis-modelling of the muon (electron) trigger, reconstruction and selection efficiencies in simulation is corrected by scale factors derived from measurements of the efficiency in data. Z→ee, Z→μμ and W→eν events are used to obtain scale factors as functions of the lepton kinematics. The uncertainties are evaluated by varying the lepton and signal selections and from the uncertainty in the evaluation of the backgrounds.

B-tagging scale factors This analysis requires at least two b-tagged jets in the signal. After a scaling applied in building the angular distribution, the b-tagging performance in Monte Carlo is corrected to that in data. The remaining uncertainty is then used as the systematic source.

MET All the energy scale/smearing variations of leptons and jets are propagated to the MET for consistency. In addition, the uncertainty from soft jets, CellOut and pile-up is estimated.

W+jets shape The uncertainty from the W+jets shape is computed by reweighting the final reconstructed W+jets events with a tool based on the leading-jet pT spectrum.

W+jets normalization The uncertainty from the W+jets normalization is computed from the error on the normalization scale factor.

W heavy flavor The uncertainty from the W heavy flavor is computed from the error on the heavy-flavor scale factor.

Fake lepton The uncertainty from the estimation of QCD events is computed from the error on the data-driven QCD weight.

Table 7.8 Systematic sources of uncertainty on W-helicity fractions for the single lepton


ttbar spin correlation study

Similarly to the W polarization, the top polarization can be analyzed through the angular distributions of its daughters, which are also called spin analyzers.

The angular distribution satisfies the equation below[61][64]:

1/N · dN/d cosθ_i = 1/2 (1 + S α_i cosθ_i)   (8.1)

where S is the modulus of the top polarization and θ_i is the angle between the direction of particle i in the top-quark rest frame and the direction of the top polarization. α_i is the spin-analyzing power of this particle, which represents the strength of its spin correlation with the top quark. The α_i of the different final-state particles are shown in Table 8.1.

Table 8.1 SM spin analyzing power at LO and NLO of top quark daughters[65][62]

Charged leptons and down-type quarks, which are almost 100% correlated with the top spin direction, are optimal spin analyzers. In contrast to leptons, however, d- and s-jets cannot be distinguished experimentally from u- and c-jets. Therefore, the analyzing power of light jets is the average value α_jet ≈ (1 − 0.31)/2 ≈ 0.35. In our analysis, we choose the b-jet on the hadronic W-decay side as the spin analyzer, to remove the uncertainty from the averaged analyzing power of the light jets.

Equation 8.1 can be used directly for single top quark production, since the top quark is produced polarized. In tt̄ production the top and anti-top spins are correlated, and the correlation strength A can be described as:

A = \frac{\sigma(\uparrow\uparrow) + \sigma(\downarrow\downarrow) - \sigma(\uparrow\downarrow) - \sigma(\downarrow\uparrow)}{\sigma(\uparrow\uparrow) + \sigma(\downarrow\downarrow) + \sigma(\uparrow\downarrow) + \sigma(\downarrow\uparrow)}

where the σ are the production cross sections of a top quark pair with the spins up or down as indicated by the arrows.
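In terms of event counts per spin configuration, the definition of A reduces to a simple asymmetry. The counts below are arbitrary illustrative numbers, not analysis results.

```python
def spin_correlation(n_uu, n_dd, n_ud, n_du):
    """A = (like-spin - unlike-spin) / total for a tt-bar sample,
    with n_uu = up-up, n_dd = down-down, n_ud = up-down, n_du = down-up."""
    like, unlike = n_uu + n_dd, n_ud + n_du
    return (like - unlike) / (like + unlike)

# Hypothetical parton-level counts (made-up numbers)
print(spin_correlation(3300, 3300, 1700, 1700))  # 0.32
```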

As for the W polarization, the spin correlation can also be measured from the angular distribution of the final-state particles[63]:

\frac{1}{\sigma}\frac{d^2\sigma}{d\cos\theta_1\,d\cos\theta_2} = \frac{1}{4}\left(1 - A\,\alpha_1\alpha_2\cos\theta_1\cos\theta_2\right)

where θ_1 and θ_2 are the angles between the direction of the daughter particle in the t (t̄) rest frame and the t (t̄) direction in the tt̄ center-of-mass system, and α_i is the spin analyzing power of the corresponding particle. The Standard Model predicts A = 0.326[63].
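Integrating this double-differential distribution gives ⟨cos θ_1 cos θ_2⟩ = −A α_1 α_2 / 9, so the mean of the product is an estimator of A. The toy below rejection-samples the distribution and recovers A; the analyzing-power values are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pair(A, a1, a2, n, rng):
    """Rejection-sample (cos t1, cos t2) from
    f = (1 - A*a1*a2*x*y)/4 on [-1, 1]^2."""
    C = A * a1 * a2
    fmax = (1 + abs(C)) / 4
    out = np.empty((0, 2))
    while len(out) < n:
        xy = rng.uniform(-1, 1, size=(n, 2))
        f = (1 - C * xy[:, 0] * xy[:, 1]) / 4
        keep = rng.random(n) * fmax < f
        out = np.vstack([out, xy[keep]])
    return out[:n]

A_true, a_lep, a_b = 0.326, 1.0, -0.39  # hypothetical analyzing powers
pts = sample_pair(A_true, a_lep, a_b, 400_000, rng)
A_est = -9 * np.mean(pts[:, 0] * pts[:, 1]) / (a_lep * a_b)
print(A_est)  # close to 0.326
```

In practice the distribution is distorted by acceptance and resolution, which is why the unfolding described below is needed before extracting A.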

Fig 8.1 The angular distribution in SM

After the tt̄ semi-leptonic events have been selected and reconstructed, the angular distribution is as below:

Fig 8.2 The angular distribution of cos θ_lepton · cos θ_(had b-jet)

Validation of the unfolding technique in the spin study

Binning choice

Following a similar procedure to the W polarization study, the bin width is taken to be between half and two times the resolution of the physics variable. For the spin study, the final choice is a distribution with 6 equal-width bins.
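The bin-width criterion above can be expressed as a small helper that lists all equal-width binnings whose width lies within the allowed window. The resolution value used here is a hypothetical placeholder, not the measured one.

```python
def n_equal_width_bins(lo, hi, resolution):
    """Return the bin counts whose equal-width bins over [lo, hi]
    have a width between 0.5x and 2x the given resolution."""
    span = hi - lo
    return [n for n in range(2, 51)
            if 0.5 * resolution <= span / n <= 2.0 * resolution]

# Hypothetical resolution of cos(theta_lep)*cos(theta_b) after reconstruction
print(n_equal_width_bins(-1.0, 1.0, 0.3))  # [4, 5, ..., 13]
```

Any count in the returned list satisfies the criterion; the choice of 6 bins sits comfortably inside this window.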

Fig 8.3 The RMS of the spin angular distribution in the electron and muon channels

Improvement by iteration

The test here uses the SM migration matrix to unfold a sample without spin correlation. The measured spin correlations are shown in Fig 8.4. This test shows that the measurement improves as the number of iterations increases.
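The iterative procedure can be sketched as a D'Agostini-style Bayesian unfolding: each iteration updates the truth estimate using the current posterior of the response matrix. This is a minimal toy with a made-up 3-bin migration matrix, not the analysis's actual response.

```python
import numpy as np

def iterative_unfold(measured, response, prior, n_iter=4):
    """D'Agostini-style iterative Bayesian unfolding.
    response[j, i] = P(measured bin j | true bin i); columns sum to 1
    (i.e. full efficiency is assumed in this toy)."""
    truth = prior.astype(float).copy()
    for _ in range(n_iter):
        # Expected measured spectrum for the current truth estimate
        expected = response @ truth
        # Bayes: P(true i | measured j) = R[j, i] * truth[i] / expected[j]
        post = response * truth[None, :] / expected[:, None]
        truth = post.T @ measured
    return truth

# Toy: a 3-bin truth smeared by a known migration matrix (made-up numbers)
R = np.array([[0.80, 0.15, 0.05],
              [0.15, 0.70, 0.15],
              [0.05, 0.15, 0.80]])
x_true = np.array([100.0, 300.0, 200.0])
meas = R @ x_true
unfolded = iterative_unfold(meas, R, prior=np.ones(3), n_iter=100)
print(unfolded)  # converges toward [100, 300, 200]
```

Starting from a flat prior, more iterations pull the estimate away from the prior and toward the true spectrum, which is the behavior the validation test above demonstrates.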

Table 8.2 Unfolded distribution in the 6×6 bins of cos θ_lepton (X) and cos θ_(had b-jet) (Y) in the electron channel.

Table 8.3 Unfolded distribution in the 6×6 bins of cos θ_lepton (X) and cos θ_(had b-jet) (Y) in the muon channel.

Summary and outlook

In this thesis, the main work of my PhD is presented: the electron identification efficiency measurement, the W polarization study and the spin correlation study, based on 4.7 fb-1 of data from the ATLAS experiment. Using the common event selection and reconstruction recommended by the ATLAS group, the signal events selected from data show good agreement with the Monte Carlo simulation.

The electron identification efficiencies of the loose, medium, tight and ++ menus are measured, and the study of the dependence on triggers and pile-up shows that the efficiencies differ more in the low electron pT region than in other regions.

For the W polarization and spin correlation studies using the unfolding method, the differential results show good agreement between the signal angular distribution in real data and the Standard Model prediction within the statistical and systematic errors. The inclusive W polarization result also matches the SM well within the statistical and systematic errors.

In late 2012 and 2013, ATLAS has been running at a center-of-mass energy of 8 TeV, which will provide higher statistics to improve the precision of the W polarization and spin correlation measurements in top physics.
