GLEAM

Overview. The top-level GlastRelease package is the Glast Event Analysis Machine (GLEAM), whose basic function is to take the "raw", Level 0 data delivered to it in LAT Data Format (LDF) and convert it to Level 1 data ready for analysis with the Science Tools software. For test and development purposes, GLEAM also incorporates input from Source Generators and processes Monte Carlo data using the same Digitization and Reconstruction Algorithms, Transient Data Store, and Persistency and Ntuple Services that will be used to process LDF data once it becomes available.

Outputs. All GLEAM output files are in ROOT format. To perform an event analysis or to generate simulated data, you must either access the GLEAM application remotely or download it to your desktop. Gleam is configurable and can produce up to five output files, one each for:
GLEAM Application: Processing Real Data
Photons are detected as discrete "events", consisting of the tracks (energy deposits) left by ionizing particles in different parts of the instrument. These events need to be classified as incident charged particles or photon-induced particle pairs. The ability to separate charged particles from gamma-ray pair events determines the level of background contaminating the sample.

Raw data received from the satellite's downlink telemetry and converted to Level 0 (L0) data is input to the Glast Event Analysis Machine (GLEAM), the reconstruction application that finds charged tracks in the Tracker (TKR) and clusters of energy deposition in the Calorimeter (CAL). Tracks are extrapolated to the Anti-Coincidence Detector (ACD) to determine whether a tile fired along the path of the track. (See Charged Particle Detection vs. Backsplash.) Potential photons, consisting of two tracks in a "vee" (or possibly a single track, for some high-energy photons), are written to an output file where they are available for further analysis.

Note: At this stage, cuts can be made to reject background events.
Notes:
The LdfEvent package is used to define the LDF-specific TDS classes. The LdfConverter converts the Event inputs to digi outputs: AcdDigi, TkrDigi, CalDigi, and Trigger. (See Graphical Class Hierarchy.) Outputs are then stored in the Transient Data Store (TDS). (See Event.) Observe that the LdfConverter outputs are also sent to the RootCnvSvc, which converts them to ROOT Tree formats (see package RootConvert) and stores them as digiRootData.
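To illustrate how a downstream Gaudi algorithm picks these digis up again, the sketch below retrieves the Tkr and Cal digi collections from the TDS in its execute() method. The header paths, collection types (Event::TkrDigiCol, Event::CalDigiCol), and TDS paths (EventModel::Digi::TkrDigiCol, EventModel::Digi::CalDigiCol) follow the usual GlastRelease conventions, but treat them as assumptions and check the Event package headers for the exact names.

// Sketch only: a Gaudi algorithm reading the digis that the LdfConverter
// placed on the TDS.  Header paths, collection types, and TDS paths follow
// common GlastRelease conventions and should be verified against the Event
// package; "DigiInspectAlg" is a hypothetical algorithm name.
#include "GaudiKernel/Algorithm.h"
#include "GaudiKernel/MsgStream.h"
#include "GaudiKernel/SmartDataPtr.h"
#include "Event/TopLevel/EventModel.h"
#include "Event/Digi/TkrDigi.h"
#include "Event/Digi/CalDigi.h"

class DigiInspectAlg : public Algorithm {
public:
    DigiInspectAlg(const std::string& name, ISvcLocator* svcLoc)
        : Algorithm(name, svcLoc) {}

    StatusCode initialize() { return StatusCode::SUCCESS; }
    StatusCode finalize()   { return StatusCode::SUCCESS; }

    StatusCode execute() {
        MsgStream log(msgSvc(), name());

        // Tracker digis written by the LdfConverter.
        SmartDataPtr<Event::TkrDigiCol> tkrDigis(eventSvc(),
                                                 EventModel::Digi::TkrDigiCol);
        // Calorimeter digis written by the LdfConverter.
        SmartDataPtr<Event::CalDigiCol> calDigis(eventSvc(),
                                                 EventModel::Digi::CalDigiCol);

        if (!tkrDigis || !calDigis) {
            log << MSG::WARNING << "digi collections not found on the TDS" << endreq;
            return StatusCode::SUCCESS;   // nothing to do for this event
        }

        log << MSG::INFO << "TkrDigis: " << tkrDigis->size()
            << "  CalDigis: " << calDigis->size() << endreq;
        return StatusCode::SUCCESS;
    }
};

The same pattern (SmartDataPtr plus a TDS path from EventModel) applies to the AcdDigi collection and to the reconstruction output objects.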
ROOT

ROOT is an I/O and analysis package that allows C++ objects to be stored in a file. The I/O package is designed to create compact files and also allows efficient access to the data. The tree structure of ROOT files allows a subset of the branches to be manipulated, thereby reducing the amount of I/O required. A C++ script extracts likely photon events and creates a new, truncated ROOT file. ROOT files containing data (whether from the satellite, simulation software, or some other source, e.g., beam-test data) all have the same internal structure, allowing I/O and low-level analysis routines to be shared.
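A short ROOT macro of the kind mentioned above might look like the sketch below. The file name, tree name, and branch names are placeholders rather than the actual Gleam output names; substitute the ones produced by your job.

// skimPhotons.C -- sketch of a ROOT macro that copies events passing a
// selection into a new, smaller file.  "merit.root", "MeritTuple", and the
// variables in the cut are placeholders, not guaranteed Gleam names.
#include "TFile.h"
#include "TTree.h"

void skimPhotons() {
    TFile* in = TFile::Open("merit.root", "READ");          // placeholder input file
    if (!in || in->IsZombie()) return;
    TTree* tree = (TTree*)in->Get("MeritTuple");             // placeholder tree name
    if (!tree) return;

    // Optionally deactivate unneeded branches to reduce I/O:
    // tree->SetBranchStatus("*", 0);  tree->SetBranchStatus("Tkr*", 1);

    TFile* out = TFile::Open("photons.root", "RECREATE");    // truncated output file
    // CopyTree applies the selection while copying, so only passing events
    // (here, a placeholder cut in the spirit of the ACD veto) are written out.
    TTree* skim = tree->CopyTree("TkrNumTracks>0 && AcdTileCount==0");
    skim->Write();

    out->Close();
    in->Close();
}

Run it with root -l skimPhotons.C; the truncated photons.root keeps the same tree structure, so the same analysis routines can read either file.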
After reconstructing the digitized event data, Gleam produces a reconRootData file and a summary ntuple file. (See package AnalysisNtuple and package merit; also see Standard AnalysisNtuple Variables.)
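For a quick look at the summary ntuple, TTree::Draw can histogram any of the Standard AnalysisNtuple Variables directly. In the sketch below, the tree name (MeritTuple) and the variable names are assumptions; consult the Standard AnalysisNtuple Variables page for the actual names.

// plotMerit.C -- sketch: histogram a couple of summary-ntuple variables.
// The tree name and the variables drawn are assumptions, not guaranteed names.
#include "TFile.h"
#include "TTree.h"
#include "TCanvas.h"

void plotMerit() {
    TFile* f = TFile::Open("merit.root");                    // placeholder file name
    if (!f || f->IsZombie()) return;
    TTree* merit = (TTree*)f->Get("MeritTuple");              // placeholder tree name
    if (!merit) return;

    TCanvas* c = new TCanvas("c", "merit quick look");
    c->Divide(1, 2);
    c->cd(1);
    merit->Draw("TkrNumTracks");                              // placeholder variable
    c->cd(2);
    merit->Draw("CalEnergyRaw", "TkrNumTracks>0");            // placeholder variable + cut
    c->SaveAs("merit_quicklook.png");
}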
Detector Geometry: Alignment Constants File

Note: An alignment constants file is an XML file that represents the actual alignment relationships between TKR components with respect to their ideal (i.e., perfectly aligned) relationships. These files also account for relationship discrepancies such as:
Elements subject to alignment:
Example – Alignment Constants File (for simulation):
For a much more detailed introduction, also see: Basics of Event Selection, TkrRecon, and the Calorimeter.

Event Selection

Background rejection performs the function of particle identification, determining whether the incoming particle was a photon. With the high charged-particle flux in space, shower fluctuations in background interactions can mimic photon showers in non-negligible numbers. Cuts are applied to the events to suppress the background.

First, all events are reconstructed, but only events in which none of the ACD tiles fired are considered. This cut eliminates 90% of triggered events. Most of the rejected events consist of charged particles, but a few legitimate photons are also rejected if, for example, one of the particles in the shower exits the detector through the sides or top and fires an ACD tile. Note: This cut could be applied before any reconstruction, but reconstructing all events is useful for comparing the data with simulations.

Next, the reconstructed tracks are tested for track quality, formed from a combination of goodness-of-fit, length of track, and number of gaps on the track. Also, an energy-dependent cut removes events with tracks that undergo an excessive amount of scattering.

Finally, it is required that there be a downward-going vee in both views, and that both tracks in the vee extrapolate to the calorimeter. (This will introduce some inefficiency for high-energy photons and for highly asymmetric electron-positron pairs. Vees with opening angles that are too large, >60°, are rejected. Such vees generally come from photons with energies below the range of interest.)

TKR Tower

A tracker tower consists of strips of silicon detectors arranged in pairs, with each element of the pair providing a separate measurement in one direction (X or Y) perpendicular to the tower axis. Reconstruction is initially done in separate Z-X and Z-Y projections. Projections are associated with each other whenever possible by matching tracks with respect to length and starting positions.

Multiple Coulomb Scattering (MS). In the absence of interactions, particle trajectories would be straight lines. However, the converter foils needed to produce the interactions (as well as the rest of the material in the detector) cause the particles to undergo multiple Coulomb scattering (MS) as they traverse the tracker, complicating both pattern recognition (i.e., finding the particles) and track fitting (determining the particle trajectories), particularly for low-energy electrons and positrons. Pattern recognition must therefore be sensitive to particles whose trajectories depart significantly from straight lines. The presence of multiple scattering also has implications for the fitting procedure. Without MS, deviations from a straight line are due solely to measurement errors, which occur independently at each measurement plane and are distributed about the true straight track. In the presence of MS, there are real random deviations from a straight line, and these deviations are correlated from one plane to the next. For example, if an individual particle scatters to the right at one plane, it is more likely to end up to the right of the original undeviated path than to the left.

Covariance Matrix. These correlations can be quantified in a covariance matrix of the measurements, which is calculated from the momentum of the particle and the amount of scattering material between the layers. The dimension of this matrix is the number of measurements. (The covariance structure is sketched below.)
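To make the correlation structure concrete, the standard small-angle result can be written down; this is generic track-fitting background, not a formula taken from the GLEAM documentation. If the material crossed at plane k, at height z_k, produces an RMS projected scattering angle theta_0,k (for example, from the Highland formula), the scattering contribution to the covariance of the measured positions at planes i and j is approximately

\theta_{0,k} \approx \frac{13.6\ \mathrm{MeV}}{\beta c\, p}\,\sqrt{\frac{x_k}{X_0}}\,\left[1 + 0.038\,\ln\frac{x_k}{X_0}\right],
\qquad
\mathrm{Cov}(m_i, m_j)\Big|_{\mathrm{MS}} \approx \sum_{k} \theta_{0,k}^{2}\,(z_k - z_i)(z_k - z_j),

where p is the particle momentum, x_k/X_0 is the material at plane k in radiation lengths, and the sum runs over the scatterers crossed before both measurement planes. Measurement errors add a diagonal term \sigma_{\mathrm{meas}}^2\,\delta_{ij}, so the matrix is diagonal only when multiple scattering is negligible.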
Solving for the track parameters in terms of the measurements involves inverting this matrix. If there is no MS, the matrix is diagonal and the inversion is trivial; MS introduces off-diagonal elements, which complicates the inversion.

Kalman Filter (KF). The Kalman Filter technique can be useful in both stages of particle reconstruction. First, the initial position, direction, and energy of the particle are estimated. The energy of the particle is based on the response of the calorimeter, and the starting point and direction are derived by looking for three successive hits that line up within some limits. From this starting point, the track is extrapolated in a straight line to the next layer. Using the estimate of the energy and the amount of material traversed, a decision is made whether the hit in the next layer is within a distance from the extrapolated track allowed by the expected multiple scattering and the uncertainty of the initial estimate. If so, the hit is added to the track, and the position and direction of the track at this plane are modified, incorporating the information from the newly added hit. The modified track is then extrapolated to the next plane, and the process continues until there are no more planes with hits. All the correlations between layers have been properly taken into account, but at each step only the MS between two successive planes need be considered, and the covariance matrix required is that of the parameters. The track parameters at the last hit have now been calculated using the information from all the preceding hits. To get the best estimate of the initial direction of the photon, however, it is necessary to know the parameters at the first hit, close to the point where the photon converts to an electron-positron pair.

Smoothing. At this point, smoothing is applied, i.e., the KF is "run backwards" from the last plane to the first, using the appropriate matrices. After smoothing, the track parameters and their errors have been calculated at each of the measurement planes, in particular the first plane. (A minimal sketch of a single predict/filter step is given below.)
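The sketch below shows a single predict/filter step for one projection, with the state taken to be (position, slope) and multiple scattering entering as process noise. It is a generic illustration of the technique, not the TkrRecon implementation; the plane spacing, hit resolution, scattering angle, and hit values are placeholder numbers.

// Sketch only: one-projection Kalman filter step with multiple scattering as
// process noise.  This is a generic illustration, not the TkrRecon code; the
// numbers (plane spacing, resolution, scattering angle, hits) are placeholders.
#include <array>
#include <cstdio>
#include <vector>

struct State {
    std::array<double, 2> x;                 // [position (mm), slope]
    std::array<std::array<double, 2>, 2> P;  // covariance of the state
};

// Extrapolate the state a distance dz to the next plane; theta0 is the RMS
// projected multiple-scattering angle for the material crossed in this step.
State predict(const State& s, double dz, double theta0) {
    State p;
    p.x[0] = s.x[0] + dz * s.x[1];           // straight-line extrapolation
    p.x[1] = s.x[1];
    // P' = F P F^T with F = [[1, dz], [0, 1]]
    p.P[0][0] = s.P[0][0] + dz * (s.P[0][1] + s.P[1][0]) + dz * dz * s.P[1][1];
    p.P[0][1] = s.P[0][1] + dz * s.P[1][1];
    p.P[1][0] = p.P[0][1];
    p.P[1][1] = s.P[1][1];
    // Add multiple-scattering process noise (thin-scatterer approximation);
    // only the material between two successive planes enters at each step.
    double q = theta0 * theta0;
    p.P[0][0] += q * dz * dz;
    p.P[0][1] += q * dz;
    p.P[1][0] += q * dz;
    p.P[1][1] += q;
    return p;
}

// Fold a strip measurement of the position (resolution sigma) into the state.
State filter(const State& pred, double meas, double sigma) {
    State f = pred;
    double S  = pred.P[0][0] + sigma * sigma;  // innovation variance
    double k0 = pred.P[0][0] / S;              // Kalman gain, position component
    double k1 = pred.P[1][0] / S;              // Kalman gain, slope component
    double r  = meas - pred.x[0];              // residual
    f.x[0] += k0 * r;
    f.x[1] += k1 * r;
    // P = (I - K H) P_pred with H = [1, 0]
    f.P[0][0] = (1.0 - k0) * pred.P[0][0];
    f.P[0][1] = (1.0 - k0) * pred.P[0][1];
    f.P[1][0] = pred.P[1][0] - k1 * pred.P[0][0];
    f.P[1][1] = pred.P[1][1] - k1 * pred.P[0][1];
    return f;
}

int main() {
    // Placeholder numbers: 32.5 mm plane spacing, 0.2 mm hit resolution,
    // 5 mrad RMS scattering per layer, and a handful of fake hits (mm).
    const double dz = 32.5, sigma = 0.2, theta0 = 0.005;
    std::vector<double> hits = {0.00, 0.25, 0.61, 0.88, 1.30};

    State s;
    s.x[0] = hits[0];  s.x[1] = 0.0;              // seed: first hit, zero slope
    s.P[0][0] = sigma * sigma;  s.P[0][1] = 0.0;
    s.P[1][0] = 0.0;            s.P[1][1] = 1.0;  // loose prior on the slope

    for (std::size_t i = 1; i < hits.size(); ++i)
        s = filter(predict(s, dz, theta0), hits[i], sigma);

    std::printf("filtered position %.3f mm, slope %.5f\n", s.x[0], s.x[1]);
    return 0;
}

Because the gain is computed from the predicted covariance, a hit lying many standard deviations from the extrapolation can be rejected at this step, which is how the pattern recognition decides whether a hit belongs to the track.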
Calorimeter

A high-energy photon traversing material loses its energy by an initial pair-production process. The calorimeter provides information about the total energy of the shower, as well as the position, direction, and shape of the shower, or of a penetrating nucleus or muon. It consists of eight layers of ten CsI(Tl) crystals ("logs") in a hodoscopic arrangement, i.e., alternately oriented in the X and Y directions, to provide an image of the electromagnetic shower. It is designed to measure photon energies from 20 MeV to 300 GeV and beyond.

To comfortably contain photons with energies in the 100-GeV range requires a calorimeter at least twenty radiation lengths thick. However, weight constraints limited Fermi's calorimeter to only ten radiation lengths in thickness; thus, it cannot provide good shower containment for high-energy photons. The mean fraction of the shower contained at 300 GeV is only about 30% for photons at normal incidence; at these energies the observed energy is very different from the incident energy, shower-development fluctuations become larger, and the resolution degrades quickly.

Correcting for Shower Leakage: Method 1 (shower profiling). Fit a mean shower profile to the observed longitudinal profile. The profile-fitting method proves to be an efficient way to correct for shower leakage, especially at low incidence angles, when the shower maximum is not contained. (A standard parametrization of the mean longitudinal profile is given after this discussion.) After the correction is applied, the resolution (as determined from a simulation) is 18% for on-axis TeV photons, an improvement by a factor of two over the result of correcting the energy with a response function based on path length and energy alone.

Correcting for Shower Leakage: Method 2 (leakage correction). Use the correlation between the escaping energy and the energy deposited in the last layer of the calorimeter. The last layer carries the most important information concerning the leaking energy: the total number of particles escaping through the back should be nearly proportional to the energy deposited in the last layer.

Both correction methods significantly improve the resolution. The correlation method is more robust, since it does not rely on fitting individual showers, but its validity is limited to relatively well-contained showers, making it difficult to use above 70 GeV for low-incidence-angle events.

Three methods are used to measure the energy deposit in the calorimeter:
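For reference, the mean longitudinal profile fitted in Method 1 is usually described by the standard gamma-function parametrization; this is textbook background rather than a formula quoted from the GLEAM code:

\frac{dE}{dt} = E_0\, b\, \frac{(bt)^{a-1} e^{-bt}}{\Gamma(a)},
\qquad t = \frac{x}{X_0},
\qquad t_{\max} = \frac{a-1}{b},

where E_0 is the incident energy, t is the depth in radiation lengths, and a and b are energy-dependent shape parameters. Fitting this shape to the energies observed in the eight calorimeter layers extrapolates the profile beyond the back of the calorimeter and so estimates the energy that leaks out.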
HalfPipe
Also see:
Note: Input to L1Proc must be one of the currently supported types:
L1 Processing
Create Digis

L1 Processing begins with event data output from the HalfPipe (i.e., event data in EBF format). The ldfReader presents the event data to the LdfConverter, which then writes the Acd, Tkr, and Cal digis to the event TDS. See:
Also see:

MOOD/MOOT (Configuration Database)

Note that output from the HalfPipe is also input to the MOOD/MOOT configuration database. This HalfPipe output is a code that enables MOOT to retrieve, from the MOOD database, all configuration-related files active at the time of the event. See:

Also see: Get Calibration from Database and Analyze Charge Injection (LCI)
See:
Finally,
ACD Analysis. The ACD has been measured to be ~99.97% efficient for minimum-ionizing particles, so the most interesting thing about the ACD is where it isn't: because of gaps in the ACD coverage, charged tracks may fail to produce a signal in any tile. The ACD Analysis identifies these gaps to remove sources of background.
See: ACD Analysis.
and
(See TkrRecon --> Major Algorithms, Access to the "interesting" quantities, and Services.)
See Introduction to the Standard Analysis of LAT Data:
Also see:
The merit package implements meritAlg, which is the algorithm that:
See:
Also see:
Owned by:
Last updated by: Chuck Patterson 07/16/2010