The QGP was discovered at the CERN SPS and studied at the dedicated RHIC collider at BNL. The LHC experiments have opened new avenues in our understanding of the properties of this state of hot and dense strongly interacting matter, observing the creation of hot hadronic matter at unprecedented temperatures, densities and volumes, and exceeding the precision of all relevant measurements performed over the past decade. The proton-lead run in 2013, which was intended as a ‘control experiment’ for the understanding of heavy-ion collisions, produced a wealth of results thanks to the observation of unexpected, possibly collective, effects.
To build on this excellent performance, the ALICE collaboration is now seeking to upgrade the detector and enhance its physics capabilities through a significant increase of the luminosity that the experiment can handle. The upgrade strategy is formulated under the assumption that, after the second long LHC shutdown in 2018, the luminosity with lead beams will gradually increase to an interaction rate of about 50 kHz.
ALICE is focusing on signatures that do not exhibit topologies which can easily be triggered on, and therefore plans to read all events into the online system at data rates exceeding 1 TB/s. The new ALICE detector will be able to record a total of 10¹¹ lead-lead interactions at a rate of 50 kHz – about two orders of magnitude higher than the current readout rate capability.
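As a back-of-the-envelope check of these figures (an illustrative sketch; the variable names are ours, not from any ALICE software):

```python
# Effective beam time needed to record 10^11 Pb-Pb interactions at 50 kHz.
interaction_rate_hz = 50e3       # target Pb-Pb interaction rate
total_interactions = 1e11        # events to be recorded after the upgrade

beam_seconds = total_interactions / interaction_rate_hz
print(f"{beam_seconds:.1e} s of effective beam time "
      f"(~{beam_seconds / 86400:.0f} days)")
```

This is consistent with the pattern of roughly one month of heavy-ion running per year mentioned later in the article.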
Figure 1: Schematic of the ALICE detector.
Besides the partial redesign of the readout electronics to cope with the increased readout rate, the planned upgrades include a new beam pipe with a smaller diameter, a new Inner Tracking System (ITS), a vertex tracker for forward muons, the upgrade of the Time Projection Chamber with Micropattern detectors, the upgrade of the forward trigger detectors and the upgrade of the online and offline system.
The upgraded ITS will improve the track position resolution at the primary vertex by a factor of three or more with respect to the present detector. It features a standalone tracking efficiency comparable to what is presently achieved by combining the information of the ITS and the TPC. Monolithic pixel sensors and dedicated carbon-fibre support structures will realize the ITS upgrade as an ultra-light, seven-layer, 10-gigapixel detector that will allow a significant boost of the tracking performance.
The Muon Forward Tracker, a vertex tracker consisting of several discs in the forward region and covering the ALICE muon arm, will also be realized with monolithic silicon pixel sensors, providing the capability to identify secondary vertices and improving mass resolution.
The high collision rate does not allow the TPC to use the traditional gating grid technique for the blocking of ions. Micropattern detectors that have intrinsic ion blocking capability will therefore be employed for the TPC, which allows a continuous readout of the detector.
The massive amount of data arriving from the detector, and the need for compression and partial reconstruction, require a very powerful data processing system, called O2, that combines functionalities presently split between ‘Online’, ‘Offline’ and the High Level Trigger. O2 will compress and partially reconstruct the events in order to allow efficient storage of all of them.
This ALICE upgrade programme has been approved by the LHCC and is now being detailed in a series of Technical Design Reports. The first one, for the new Inner Tracking System, has already received the final approval at the last meeting of the Research Board and the others are following.
The ALICE upgrade will significantly increase the experiment’s rate capability and realize a monolithic pixel tracker and a continuously sensitive TPC – two particle-detector concepts that have long been dreamed about and that will boost ALICE’s experimental capabilities for its exciting physics scope. With such an instrument at hand, ALICE will investigate heavy-ion collisions after LS2 and beyond LS3, in the typical pattern of one month per year of LHC operation.
ITS Upgrade Plans
The specification and layout of the new detector, the R&D activities and technical implementation of the main components, and the detector and physics performance are presented in the Technical Design Report (TDR) on the Upgrade of the Inner Tracking System, which was submitted to the LHCC in December 2013 and approved by the CERN Research Board in March 2014.
The new ITS consists of seven concentric layers of pixel detectors. Monolithic Active Pixel Sensors (MAPS), based on a 0.18 µm CMOS process, have been selected as the technology for all layers.
The ITS upgrade plan is mainly based on building a new silicon tracker with greatly improved features in terms of the determination of the impact parameter (d0) with respect to the primary vertex, the tracking efficiency at low pT and the readout rate capability. This new silicon tracker will allow ALICE to measure charm and beauty production in Pb-Pb collisions with sufficient statistical accuracy down to very low transverse momentum, to measure charm baryons and to perform exclusive measurements of beauty production. These measurements are essential in order to understand the energy-loss mechanism and the thermalization of heavy quarks in the QGP state. In addition, the new ITS will also play a key role in the measurement of thermal photons and low-mass dileptons. Measuring these characteristics of the Quark-Gluon Plasma will allow us to better understand Quantum Chromodynamics as a genuine multi-particle theory.
Different options explored for the structure of the upgraded Inner Tracking System based on the work of the four working groups of the ITS Upgrade Project.
The ITS upgrade will allow new measurements of charm and beauty production which are needed in order to answer the above physics questions. More specifically, it will allow the study of the thermalization of heavy quarks in the medium by measuring charm and beauty hadrons – D mesons, the Λc and Λb baryons, baryon/meson ratios and the elliptic flow of charm and beauty mesons and baryons – extending these measurements down to very low pT for the first time. Secondly, it will give us a better understanding of the quark-mass dependence of in-medium energy loss by measuring the nuclear modification factors RAA of the pT distributions of D and B mesons. The upgraded ITS will also offer a unique capability of measuring beauty production via displaced D0 → Kπ and J/ψ → ee decays; moreover, it will improve the measurement of single displaced electrons and the reconstruction of beauty decay vertices. Finally, the upgraded ITS will give us the chance to characterize the thermal radiation coming from the QGP and the in-medium modification of hadronic spectral functions related to chiral-symmetry restoration (particularly for the ρ meson).
The mechanical structure for the upgraded ITS was designed to meet the physics requirements of the new detector.
The ITS upgrade project requires an extensive R&D effort by our researchers and collaborators all over the world on cutting-edge technologies: silicon sensors, low-power electronics, interconnection and packaging technologies, ultra-light mechanical structures and cooling units. The new technology enhances the ALICE resolution of the charged-track impact parameter by almost a factor of three. This factor in the bending plane is the result of a lower material budget, a reduced radial distance of the first layer and an increased spatial resolution. Based on the most recent developments in pixel-detector technologies, the material budget is substantially reduced by thinning the silicon detector components as well as the readout system, the mechanical support and the cooling system. The reduced material budget improves the sensitivity to charm by one order of magnitude or more, depending on the transverse momentum range, and implies a better signal-to-background ratio. Fully reconstructed, rarely produced heavy-flavour hadrons will therefore become accessible thanks to the efforts of the groups that participate in the ITS upgrade project. In summary, the baseline idea for the layout of the ITS upgrade is to replace the existing ITS detector in its entirety with three inner layers of pixel detectors followed by four outer layers of pixel detectors or silicon strips with lower granularity. In addition, the R&D on monolithic pixel detectors aims at increasing the S/N ratio, reducing the power density – which is important in reducing the material budget – and shortening the integration time (this is where the R&D on the pixel chip is mostly focused).
Substantial effort was placed on reducing the material budget, the thickness of the silicon detector components and the mechanical support of the system.
The above plans for an upgraded Inner Tracking System for ALICE have recently been endorsed by the LHC Committee. This success wouldn’t have been possible without the hard work of so many different people coming from various institutions around the globe. In order to deal with the complex physical and technical requirements of the upgrade plans we organized four different working groups, which had to collaborate and at the same time come up with solutions in very specific areas. I would like to thank the conveners and the members of each working group for their strong commitment to our project.
List of the Institutes participating in the project of the ITS Upgrade.
The upgrade plans of ALICE require a long shutdown (LS) and, therefore, will naturally have to be in phase with the installation of upgrades for the other LHC experiments, which are planned for 2013/14 and 2017/18. The ALICE upgrade targets the long LHC shutdown period in 2017/18 (LS2). Last but not least, the R&D efforts for the ITS upgrade will continue until 2014, construction will take place in 2015–17, and the final phase of installation and commissioning is planned for 2018.
TPC Upgrade Plans
The expected lead-lead collision rate in the LHC Run 3 (2019 onwards) is 50 kHz. This corresponds, on average, to a collision every 20 μs. In the ALICE TPC, a 90 m³ gas volume where ionization electrons take 100 μs to drift the full 2.5 m to the Readout Chambers, the equivalent of five events will overlap, which implies that, in order to record the information of all charged particles from all collisions, a continuous readout technique must be used.
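The pile-up figure follows directly from these two numbers (an illustrative sketch):

```python
# Average number of events overlapping in the TPC drift volume.
rate_hz = 50e3          # Pb-Pb interaction rate in Run 3
drift_time_s = 100e-6   # full 2.5 m electron drift time

mean_spacing_s = 1.0 / rate_hz                 # average time between collisions
overlapping_events = rate_hz * drift_time_s    # events piled up during one drift
print(f"{mean_spacing_s * 1e6:.0f} us between collisions, "
      f"{overlapping_events:.0f} events overlap on average")
```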
This poses two severe problems for the Multi-Wire Proportional Chambers used for the readout of the current TPC. These chambers are composed of an anode wire grid, where the electron amplification occurs, sandwiched between a cathode wire grid and a flat pad plane, where the signals are read out. On top of this structure there is another wire grid, called the gating grid, which makes it possible for the detector to perform at high event rates and multiplicities. Introduced in the LEP era (ALEPH, DELPHI), the gating grid prevents electrons from non-triggered events from reaching the amplification region, by applying an alternating voltage on its wires. Upon a trigger, the gating grid is quickly switched to a flat potential, allowing the ionization electrons of this particular event to reach the anode wires and induce a signal on the pads. The crucial role of the gating grid is to close just after all electrons from the event have reached the anode grid, so as to trap the positive ions produced in the avalanches and prevent them from invading the drift volume. In the same way, charge from non-triggered events is never amplified and thus no extra positive ions are produced. This mechanism keeps the drift volume relatively clean of slowly drifting ions (160 ms full drift time), which would otherwise build up a considerable space-charge density and lead to important distortions of the electric drift field. For example, if the current TPC were run at 50 kHz Pb-Pb without switching the gating grid, tracks would appear distorted by as much as 1 m – obviously far too much to correct for. If we were to use the gating grid, the maximum trigger rate would be determined by the 100 μs for the drift of the electrons (gating grid open) plus another 180 μs for the corresponding ions to reach it (gating grid closed); this is about 3.6 kHz, much lower than the 50 kHz the LHC is expected to provide.
So a gating technique is not possible.
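The rate limit quoted above can be sketched in a few lines (illustrative only):

```python
# Maximum trigger rate of a gated TPC readout: the gate must stay open for
# the full electron drift and then stay closed until the back-drifting ions
# are collected, so each trigger occupies the chamber for the sum of the two.
electron_drift_s = 100e-6   # gate open: electrons drift to the anode wires
ion_clearing_s = 180e-6     # gate closed: ions drift back to the gating grid

max_rate_hz = 1.0 / (electron_drift_s + ion_clearing_s)
print(f"maximum gated rate: {max_rate_hz / 1e3:.1f} kHz")  # well below 50 kHz
```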
Furthermore, the amount of charge reaching the anode wires would saturate the amplification field in their vicinity, thus affecting the uniformity of the gas gain. Remember here that one of the functions of the ALICE TPC is particle identification through the measurement of the specific energy loss of all charged particles; if the gain is modified by fluctuating space charge in the amplification region, the dE/dx determination will be seriously affected. So wire chambers altogether won’t do the job.
So we look for alternative solutions for the readout chambers, and an obvious choice is micro-pattern gaseous detectors: GEMs. These Gas Electron Multipliers, the famous foils with lots of tiny holes introduced by F. Sauli in the 1990s, are certainly capable of coping with the rates and multiplicities we expect and, it is said, provide ‘intrinsic ion blocking’ – just what we need. However, standard configurations of triple GEMs do not provide sufficient ion blocking for us. We define the Ion Back-Flow (IBF) as the number of positive ions, after amplification, escaping back into the drift volume per initial primary electron. In order to keep the track-reconstruction distortions at a bearable level, of the order of 10 cm, the IBF of a GEM structure should be 1% or below. Standard triple GEMs achieve about 5%, so there is some way to go. After an intensive R&D programme, an IBF below 1% has indeed been achieved by using non-standard configurations of stacks of four GEMs, with hole pitches, voltages and fields different from the standard ones. It turns out that the minimization of the IBF competes with the energy resolution, i.e. the precious dE/dx performance, and by careful optimization of the GEM structure a good compromise has been reached.
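To first order the space-charge distortions scale with the density of back-drifting ions, and hence with the IBF. The simple linear scaling below (our illustrative assumption, not a full space-charge calculation) reproduces the numbers quoted above:

```python
# Linear scaling of space-charge distortions with ion back-flow (IBF).
# Assumption for illustration: distortion proportional to back-flowing ions.
DISTORTION_AT_1_PERCENT_CM = 10.0   # bearable distortion quoted for IBF = 1%

def distortion_cm(ibf_percent: float) -> float:
    """Estimated track distortion for a given ion back-flow in percent."""
    return DISTORTION_AT_1_PERCENT_CM * ibf_percent

print(distortion_cm(5.0))   # standard triple GEM: ~50 cm, too large
print(distortion_cm(1.0))   # optimized 4-GEM stack: ~10 cm, bearable
```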
One GEM foil used in a full-size prototype of the TPC inner chamber.
But what about the stability of such an arrangement? GEMs have been optimized for years to be robust against discharges. Although we have departed substantially from the standard configuration, it turns out that sharing the gain between four, rather than three, GEMs provides, at our unusual fields, the same discharge probability (about 10⁻⁸ for alpha particles) as the standard device. It should be noted here that, for various reasons, the operating gas of the upgraded TPC is Ne-CO2-N2 (90-10-5), where the addition of N2 has proven to strengthen the stability.
So we think we have a concept that guarantees charged-particle momentum determination and excellent particle identification through dE/dx. And we keep the beautiful field cage of the TPC.
But there is one more change we have to undertake. The pad plane has switched roles from being a cathode to, in the GEM case, an anode, which means that the polarity of the signal will be negative. This ‘little detail’, together with the need to read out all pads continuously, leads to the necessity of redesigning and building a new set of front-end electronics, of which the first samples are now being tested.
The ALICE experiment was originally designed as a relatively low-rate experiment. This will no longer be the case in Run 3, scheduled to start in 2019. By then, most of the ALICE detectors and its computing system are expected to inspect and read out all interactions at a rate of up to 50 kHz and, allowing a factor of two of headroom, the rate is expected to scale up to 100 kHz. This major upgrade includes a new, high-resolution, low-material-thickness ITS and an upgrade of the TPC with the replacement of the existing readout chambers by GEM detectors. Part of the ALICE upgrade plans is also a new pipelined readout as well as a new computing system that will perform the functions presently carried out by the DAQ, HLT and offline systems.
The O2 project has been launched to address this extremely ambitious programme, which requires a new common organization of the DAQ, HLT and offline projects. The main objective is to develop together the new ALICE computing system, which should be ready for Run 3 of the LHC. This system will collect, process and compress up to 1 TByte of raw data per second, thus being able to cope with the increased flow of data.
Following the LS1 period and the subsequent upgrades of the LHC and the ALICE detectors, a much larger number of events will be recorded. More specifically, the rate of heavy-ion events handled by the online systems up to permanent data storage should increase to up to 50 kHz; this corresponds to an increase of roughly two orders of magnitude with respect to the present system, and signals a new challenge for the amount of data that ALICE will have to record and use for physics analysis. The mean event size is estimated to be of the order of 23 MByte, resulting in a global data throughput of approximately 1150 GByte/s being read out from the detector; this large volume of data clearly needs to be substantially reduced. The estimated data throughput to mass storage after data compression is of the order of 83 GByte/s at peak, with an average of 12 GByte/s.
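The quoted throughput and the implied compression factor follow from simple arithmetic (a sketch using only the numbers above):

```python
# O2 data-volume arithmetic: raw throughput and peak compression factor.
event_size_gb = 23.0 / 1000.0   # mean Pb-Pb event size, ~23 MByte
rate_hz = 50e3                  # peak heavy-ion readout rate

raw_throughput_gb_s = event_size_gb * rate_hz   # read out from the detector
compressed_peak_gb_s = 83.0                     # after compression, at peak
print(f"raw: {raw_throughput_gb_s:.0f} GByte/s, compression at peak: "
      f"~{raw_throughput_gb_s / compressed_peak_gb_s:.0f}x")
```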
Among the challenges that O2 will have to address, physics simulation, online calibration, online reconstruction and data quality control are the toughest. There is, of course, in-depth experience in the ALICE collaboration on these topics. However, the challenge is compounded by the continuous readout of some detectors, which will result in an unceasing flow of data with a mix of superimposed interactions. This will require a complete redesign of all the relevant algorithms.
The requirement to run at a peak rate of 50 kHz translates into an average rate of 20 kHz during a complete fill of the LHC. One month of Pb-Pb running will result in the acquisition of 2 × 10¹⁰ events, two orders of magnitude more than the data taken in the 2011 Pb-Pb run. This means that while at the current event rates the data are reconstructed in two months with 10⁴ cores, after the upgrade 10⁶ cores would be needed in order to reconstruct the Pb-Pb data within the same period. In other words, even taking into account the advancements in the performance of computing systems, the performance of the code needs to increase by at least a factor of six in order to cope with these requirements, and this can come from the optimization of the current code.
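The factor of six can be recovered from the numbers above; note that the per-core hardware gain assumed by Run 3 is a hypothetical value chosen for illustration:

```python
# Required software speed-up for Run 3 reconstruction (illustrative sketch).
events_factor = 100       # ~100x more events than the 2011 Pb-Pb run
cores_today = 1e4         # cores reconstructing today's data in two months
cores_unoptimized = cores_today * events_factor   # 1e6 cores with today's code

# Assumed per-core performance gain from hardware evolution by Run 3
# (hypothetical value for illustration):
hardware_gain = 16
code_speedup_needed = events_factor / hardware_gain
print(f"software must speed up by ~{code_speedup_needed:.0f}x")
```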
Secondly, the ALICE upgrade will be based on a combination of continuous and triggered readout. The ITS and TPC are considering implementing a continuous readout, while TOF and TRD will use a triggered readout based on the L0 trigger. Moreover, a delayed trigger will be available for slower detectors. The detector readout is the only part of the system that must be able to handle 100 kHz from the beginning, and a total capacity of 25 Tbit/s is currently foreseen. The new readout architecture requires a profound redesign of the DAQ and HLT systems.
Finally, it should be noted that data acquisition and processing will be carried out by a large processor farm based on GPUs and CPUs. The full event reconstruction will be performed in one event-building and processing node, where the final data compression and data recording will take place before the local data storage.
The computing hardware is another area requiring a lot of R&D. There is also quite some experience in this area within ALICE, but the computing arena has evolved a great deal in recent years. The former model of one quasi-monopolistic provider of CPU chips (Intel) delivering chips with the same architecture for more than 30 years and doubling their performance every 18 months is gone. The increase in performance now comes from including more and more cores in every chip, not from an increase of the clock rate. New actors and new architectures are also showing up with the explosion of mobile computing and of powerful co-processors (GPUs, many-core processors, FPGAs). These changes will require revisiting the current usage of parallel platforms. In order to exploit the parallel hardware offered by vendors, the code has to be adapted to the hardware to a large extent. The calibration and reconstruction of the data will have to run very efficiently on a highly parallel system, with several levels of parallelism of different granularities. This poses another important challenge: the code will have to be optimized for the newly emerging parallel architectures.
In order to start working on these objectives, the O2 project has set up a dozen Computing Working Groups (CWGs) on different topics such as the architecture, the tools, the dataflow, the data model, the control and configuration, and the software lifecycle. To ease and encourage the joint effort, these CWGs are formed by members of the three projects working together on the same topic. Members of the project meet on a regular basis in plenary meetings to share and review the status of the CWGs. All the ALICE experts in these areas are active in the CWGs dedicated to these issues.
This large effort is added on top of all the other tasks to be accomplished during the LS1 period: analysing data, writing physics papers and preparing for the Run 2. The project is planning to present in September 2014 a Technical Design Report to the LHCC. There are no such things as silence periods in HEP experiments!
To give access to new measurements that are not possible with the present Muon Spectrometer setup, and to better exploit the unique kinematic range accessible to the ALICE Muon Arm at the LHC, the Muon Forward Tracker (MFT) was proposed in the context of the ALICE upgrade plans, to be installed in 2017/18 during the LHC Long Shutdown 2.
The MFT is a silicon pixel detector added in the Muon Spectrometer acceptance (−4.0 < η < −2.5) upstream of the hadron absorber. The basic idea motivating the integration of the MFT in the ALICE setup is the possibility of matching the muon tracks, extrapolated from the tracking chambers after the absorber, with the clusters measured in the MFT planes before the absorber; provided the match between the muon tracks and the MFT clusters is correct, the muon tracks gain enough pointing accuracy to permit a reliable measurement of their offset with respect to the primary vertex of the interaction. Realistic simulations show that correct-matching rates as high as 60% and offset resolutions of ≈ 100 µm are expected already for muons with pT ≈ 1 GeV/c; for pT ≈ 3 GeV/c, less than 10% of the muons are wrongly matched and the offset resolution stays below 70 µm.
By adding the MFT silicon pixel detector, the rejection power gained against muon pairs from the combinatorial background will allow us to dramatically reduce the uncertainties associated with the measurement of the ψ charmonium state, while the separation of displaced and prompt J/ψ production will become possible by means of the measurement of the pseudo-proper decay-length distributions. The enhanced pointing accuracy of the muon tracks will also give us the possibility to disentangle open charm and beauty production on the basis of the analysis of the single-muon and dimuon offset distributions, starting from pT > 2 GeV/c. Low-mass dimuons, finally, will profit enormously from the significantly improved rejection of the combinatorial background, as well as from the much better mass resolution available for the low-mass narrow resonances.