BOOST 2025 is the 17th in a series of successful joint theory/experiment workshops that bring together the world's leading experts in theoretical and experimental collider physics to discuss the latest progress in, and to develop new approaches to, the reconstruction and use of jet substructure for studying Quantum Chromodynamics (QCD) and searching for physics beyond the Standard Model.
The conference will cover the following topics:
🎉 You can contact the 2025 Local Organizing Committee directly by email: 🎉
BOOST-2025-LOC@brown.edu
In 2024 the LHC delivered more than 100/fb of integrated luminosity to ATLAS and CMS in a single year, with an average of 57 pileup interactions per bunch crossing, challenging the CMS detector and jet reconstruction. We present the latest developments in jet calibration and performance for LHC Run 3.
The precision and reach of physics analyses at the LHC are often tied to the performance of hadronic object reconstruction and calibration, with any incremental gains in understanding and reduced uncertainties being impactful on ATLAS results. Recent refinements to the reconstruction and calibration procedures for jets and missing energy by the ATLAS collaboration have resulted in reduced uncertainties, improved pileup stability, and overall performance gains. In this contribution, highlights of these developments will be presented.
Hadronic object reconstruction and classification are among the most promising settings for cutting-edge machine learning and artificial intelligence algorithms at the LHC. In this contribution, highlights of ML/AI applications by ATLAS to QCD and boosted-object identification, MET reconstruction, and other tasks will be presented.
Innovation in jet tagging techniques to identify quark flavours, gluons, or boosted heavy particles (tau, W, Z, H, top) has been an important driver in maximally exploiting the physics potential of LHC data, and remains an active area of study in CMS. In this talk we present the latest algorithm developments and performance results.
Missing transverse momentum is a crucial experimental observable for many analyses of data from detectors at hadron colliders. In the standard model, missing transverse momentum originates from neutrinos. Beyond-the-standard-model particles, such as dark matter candidates, are also expected to leave the detector undetected. This talk presents a novel missing transverse momentum estimator based on deep neural networks, called “DeepMET”, developed for the CMS experiment at the LHC. It produces a weight for each reconstructed particle in an event; the DeepMET estimator is the negative of the vector sum of the weighted transverse momenta over all particles. The talk presents performance improvements compared to estimators previously employed by CMS and resilience against the effect of additional proton-proton interactions accompanying the collision of interest.
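For orientation, the construction described above can be written compactly (with $w_i$ denoting the learned weight for reconstructed particle $i$; the notation here is ours, for illustration):

$$ \vec{p}_{\mathrm{T}}^{\,\mathrm{miss}} \;=\; -\sum_{i\,\in\,\mathrm{particles}} w_i\, \vec{p}_{\mathrm{T},i} $$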
The principal ATLAS calorimeter signals are clusters of topologically connected cell signals. These clusters not only provide an energy and direction measurement, but also carry sufficient shape and structural information to calibrate them individually. The standard approach uses multi-dimensional binned look-up tables to retrieve scale factors for this calibration. A machine-learned calibration overcomes some of the limitations of the standard approach, which mostly stem from the loss of correlations between the input observables and from bin-transition effects. A Bayesian neural network (BNN) has been designed to learn a calibration based on the response of the topo-clusters in fully simulated multi-jet final states in proton-proton collisions at the Large Hadron Collider (LHC), including the effects of pile-up on the calorimeter signal in ATLAS under operational conditions observed in LHC Run 2 (2015-2018). This network also predicts the uncertainty on the learned calibration. The performance of this calibration is compared to previously used and explored topo-cluster calibrations. The learned uncertainties are validated by comparison to independently derived uncertainty predictions, and their interpretation in the context of detector signal features is discussed.
We present recent progress in CMS towards an improved implementation of particle-flow (PF) reconstruction with the use of machine learning.
We present the first application of graph neural networks using GravNet and object condensation to particle flow reconstruction at the International Linear Collider. By embedding both calorimeter hits and tracking information into a latent space, our deep learning model performs simultaneous particle clustering and energy regression. Charged tracks are treated as condensation anchors, improving the separation of overlapping showers. Evaluated on ILD full simulation with tau and light-quark jet samples, the model outperforms PandoraPFA in clustering efficiency and purity. In boosted topologies, where multiple decay products are densely collimated, reducing confusion in particle flow reconstruction is essential. Our approach shows a marked reduction in clustering ambiguities, offering a promising path forward for high-fidelity reconstruction in high-energy jet environments.
The Lund jet plane is a representation of the emissions within a jet, where each point corresponds to an emission. Hard and soft emissions, as well as collinear and wide-angle emissions, correspond to different regions of the Lund plane and are populated differently by jets with different origins. This means the Lund plane can be used for jet tagging. We present previous ATLAS studies on W and top tagging using the Lund plane, and recent efforts to improve performance and reduce background model dependence.
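For reference, each emission is conventionally placed in the plane at coordinates built from its angle $\Delta R$ and transverse momentum $k_t$ relative to the emitter (the standard convention, stated here for orientation):

$$ \left(\ln\frac{1}{\Delta R},\ \ln\frac{k_t}{\mathrm{GeV}}\right) $$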
A muon collider combines high center-of-mass energy, clean collision environments, and a compact design into a powerful tool for high-energy physics. This study addresses the critical challenge of the beam-induced background (BIB), a byproduct of muon decay that complicates event reconstruction and impacts detector performance. We explore software-based solutions, including algorithms like SoftKiller and various parameter cuts, to mitigate BIB effects, and outline future directions for advancing BIB mitigation techniques. The results contribute to the realization of a muon collider as part of the next generation of particle physics research.
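As a concrete illustration of the kind of mitigation mentioned above, here is a minimal sketch of the SoftKiller idea, which removes soft particles below an event-by-event threshold chosen so that half of the grid cells become empty. The grid size and acceptance are illustrative assumptions, not the settings of this study:

```python
import numpy as np

def softkiller_keep(pt, rap, phi, grid=0.4, rap_max=2.5):
    """SoftKiller sketch: the pT cut is the median of the hardest-particle
    pT per grid cell, so that half the cells end up empty; particles below
    the cut are discarded. Inputs are arrays over particles in one event."""
    n_rap = int(2 * rap_max / grid)
    n_phi = int(2 * np.pi / grid)
    cell_max = np.zeros((n_rap, n_phi))          # hardest pT per cell (0 if empty)
    i = np.clip(((rap + rap_max) / grid).astype(int), 0, n_rap - 1)
    j = (phi / grid).astype(int) % n_phi         # phi assumed in [0, 2*pi)
    np.maximum.at(cell_max, (i, j), pt)
    pt_cut = np.median(cell_max)                 # half of cells fall below this
    return pt > pt_cut                           # boolean mask of kept particles
```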
The identity of dark matter (DM) is one of the most profound questions at the interface of particle physics and cosmology. Weakly interacting massive particles (WIMPs) are promising DM candidates that explain the observed relic density and are under investigation in direct and indirect searches. Within the context of R-parity conserving supersymmetric extensions of the standard model, the WIMP DM candidate is the lightest supersymmetric particle, typically the lightest neutralino.
At CERN's Large Hadron Collider (LHC), a broad set of final states has been used to probe neutralino production via cascade decays of heavier colored and electroweak SUSY particles. These experiments exclude neutralino masses ranging from 100 GeV to roughly 800 GeV, depending on the physics model and the phase space examined. Nonetheless, compressed mass spectra, where the mass difference between the heavier SUSY particles and the neutralino is small, have proved difficult to probe at the LHC. This is due to the challenge of triggering, at a low enough rate, on events containing low-$p_\mathrm{T}$ objects, in addition to the experimental difficulty of identifying such objects with sufficient efficiency amongst the large hadronic backgrounds.
In this talk, we present a phenomenology study investigating pair production of supersymmetric partners to $\tau$ leptons ("stau leptons" $\widetilde{\tau}$) through pure electroweak vector boson fusion in proton-proton collisions at CERN's Large Hadron Collider (LHC). We examine the theoretically motivated stau-neutralino co-annihilation compressed-mass spectrum phase space that has traditionally been challenging due to experimental constraints, large backgrounds, and low signal cross-sections. This phase space is of particular interest in thermal bino-wino DM cosmology models considering stau-neutralino co-annihilation, since the mass difference in these models must be $<50$ GeV to obtain the correct DM relic density observed experimentally.
The final states considered have two jets, large missing momentum, and one or two $\tau$ leptons. The analysis utilizes a high-performance, interpretable, sequential attention-based neural network that significantly improves signal sensitivity compared to traditional methods. We report expected signal significances for integrated luminosities of $140$, $300$, and $3000$ $\textrm{fb}^{-1}$, corresponding to the current data acquired at the LHC, the expectation for the end of Run 3, and the expectation for the high-luminosity LHC. This methodology results in projected 95% confidence level bounds that cover stau masses up to 850 GeV in the stau-neutralino co-annihilation region within the R-parity conserving minimal supersymmetric standard model, a parameter space well beyond the reach of current searches at the LHC.
As the accuracy of experimental results in high energy physics increases, so too must the precision of Monte Carlo simulations. Currently, event generation at next-to-leading order (NLO) accuracy and beyond produces a number of negatively weighted events. Not only are these unphysical, but they also drive up the computational load and can be pathological when used in machine learning analyses. We develop a post hoc ‘cell reweighting’ scheme by imposing a metric on the multidimensional space of events, so that nearby events on this manifold are reweighted together. We compare the performance of the algorithm for different choices of metric, and explicitly demonstrate it by applying the reweighting scheme to simulated data for a Z boson produced with two jets at NLO accuracy.
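To make the idea concrete, below is a minimal, hedged sketch of a neighbourhood-based reweighting in the spirit described above. The metric (plain Euclidean distance on some event features) and the neighbourhood size are illustrative assumptions, not the algorithm of the talk:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def cell_reweight(features, weights, k=50):
    """Sketch of post-hoc reweighting: events that are close under the chosen
    metric share their weights, letting negative contributions cancel against
    nearby positive ones while approximately preserving the weighted sum."""
    nbrs = NearestNeighbors(n_neighbors=k).fit(features)
    _, idx = nbrs.kneighbors(features)            # k nearest events per event
    new_w = weights[idx].mean(axis=1)             # local ("cell") average weight
    new_w *= weights.sum() / new_w.sum()          # restore the global sum
    return new_w                                  # non-negative if cells are large enough
```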
In hadron collider experiments, it is essential to accurately reconstruct particles from detector information in order to gain a deeper understanding of various phenomena. In particular, since detectors such as ATLAS have limited spatial granularity in the calorimeter cells, it is challenging to reconstruct particles within jets.
To mitigate this difficulty, we are developing a Particle Flow algorithm based on an ATLAS-like detector. Specifically, we construct a model that takes calorimeter cell and track information as input and outputs the properties of each particle, including the particle type, momentum, and direction, as well as the number of particles.
One of the main challenges in this task is that the number of output particles is not known in advance. Conventional approaches often require an additional step to estimate the expected number of particles or to explicitly associate detector objects with truth particles.
Inspired by the DEtection TRansformer (DETR) architecture used for object detection, we propose an end-to-end model that directly generates particle candidates from calorimeter cells and tracks. In this poster, we present an overview of our model and its performance on a single-jet benchmark.
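For readers unfamiliar with DETR-style set prediction, the key ingredient is a bipartite matching between predicted candidates and truth particles, which removes the need to fix the number of outputs in advance: unmatched predictions are simply trained toward a "no particle" class. A minimal sketch (the cost terms and names are illustrative assumptions, not this model's loss):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_candidates(pred_p4, truth_p4, pred_cls_prob, truth_cls):
    """DETR-style matching sketch: build a cost for every (prediction, truth)
    pair from a kinematic distance plus a classification term, then solve the
    optimal one-to-one assignment with the Hungarian algorithm."""
    # kinematic distance between all prediction/truth pairs
    kin_cost = np.linalg.norm(pred_p4[:, None, :] - truth_p4[None, :, :], axis=-1)
    # reward putting probability on the correct truth class
    cls_cost = -pred_cls_prob[:, truth_cls]
    rows, cols = linear_sum_assignment(kin_cost + cls_cost)
    return rows, cols   # matched prediction/truth indices; the rest are "no particle"
```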
Particle flow (PFlow) is one of the key detector concepts for future Higgs factories. More than 15 years ago, it was proposed that highly granular calorimeters combined with a PFlow algorithm could deliver almost a factor of two better jet energy resolution than traditional calorimetry.
The expected performance has not changed significantly since then, while the hardware studies have advanced greatly.
We aim to change this situation, for the first time in 15 years, in light of recent DNN improvements. After a first trial with a GNN-based algorithm, we are replacing it with a more recent Transformer-based algorithm, and the first implementation already gives promising results. In this poster we will show details of the algorithm, comparisons with existing algorithms, and future prospects.
Many modern jet clustering algorithms, such as the anti-$k_T$ algorithm, were proposed after LEP had concluded. This poses a unique opportunity and challenge for analyzing the archived $e^+e^-$ data from the LEP era. In this poster, we discuss methodologies for calibrating jets developed specifically for $e^+e^-$ collisions, as well as the jet observables that this work allows us to probe.
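For reference, the anti-$k_T$ distance measures in the standard pp formulation are shown below; $e^+e^-$ variants replace transverse momenta and $\Delta R$ with particle energies and opening angles:

$$ d_{ij} = \min\!\left(p_{Ti}^{-2},\, p_{Tj}^{-2}\right)\frac{\Delta R_{ij}^{2}}{R^{2}}, \qquad d_{iB} = p_{Ti}^{-2} $$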
The fragmentation process in QCD remains an elusive mystery. Energy Correlators, which are correlation functions of energy flow, provide a novel probe into the dynamics of the QCD phase transition and a precision technique for the experimental measurement of fundamental QCD parameters such as the strong coupling constant. In this talk, we will describe extensions of these ideas to understand the dynamics of jets initiated by heavy quarks, objects whose properties can now be accessed at the precision level at the LHC. I will discuss how heavy quark dynamics imprints itself on this observable, and highlight how energy correlators provide a unique check of SCET factorization theorems in the back-to-back and collinear regions through comparisons to fixed-order results at finite angles.
In this talk, I will present our research on the substructure of jets containing heavy flavour. Our main goal is to better understand these jets from a theoretical perspective, using resummed perturbative techniques that are specially designed for jets coming from heavy quarks. In particular, we provide analytical predictions for several key jet substructure observables, including jet angularities and the Lund plane density for b-jets. I will highlight the most significant differences between our results and those obtained in the massless quark approximation.
Energy-energy correlators (EECs) within high energy jets serve as a key experimentally accessible quantity to probe the scale and structure of the quark-gluon plasma (QGP) in relativistic heavy-ion collisions. The CMS Collaboration's first measurement of the modification to the EEC within single inclusive jets in Pb+Pb collisions relative to p+p collisions reveals a significant enhancement at small angles, which may arise from jet transverse momentum $p_T$ selection biases due to jet energy loss. We investigate the dependence of jet EECs on the flavor of the initiating parton. The EEC distribution of a gluon jet is broader and the peak of transition from the perturbative to the non-perturbative regime occurs at a larger angle than that of a quark jet. Such flavor dependence leads to the different EECs for $\gamma$-jets and single inclusive jets due to their different flavor composition. It is also responsible for a colliding energy dependence of EECs of single inclusive jets at fixed jet energy. We also investigate the impact of flavor composition variation on the $p_T$ dependence of the jet EEC. We further propose that a change in the gluon jet fraction in A+A collisions compared to p+p can also contribute to a non-negligible enhancement of the medium modified EEC at small angles. Using the \textsc{jewel} model, we predict the reduction of the gluon jet fraction in A+A collisions and estimate its impact on the EEC.
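For orientation, a common convention for the two-point EEC within a jet, as used in such studies, weights particle pairs by their momenta at fixed angular separation:

$$ \mathrm{EEC}(\Delta r) = \sum_{\mathrm{pairs}\,(i,j)} \frac{p_{T,i}\, p_{T,j}}{p_{T,\mathrm{jet}}^{2}}\ \delta\!\left(\Delta r - \Delta r_{ij}\right) $$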
Jet flavor tagging for linear Higgs factories (ILC, CLIC) has long been performed with BDT-based algorithms. Stimulated by recent improvements at the LHC experiments, it has been updated with a DNN-based algorithm, namely the Particle Transformer (ParT), which already shows a large improvement of around a factor of 10 in background rejection for b and c tagging. It also enables strange tagging as well as particle-antiparticle separation.
In this talk we will show a performance study of flavor tagging with the ILD full detector simulation using a ParT-based algorithm with improvements made for e+e- collider studies. It includes a comparison with fast simulation (for FCC and ILC), where we see significant differences, especially in the dependence on training statistics and on detector characteristics. We also discuss ongoing work on physics applications, such as the Higgs self-coupling and Higgs decays to strange quarks, on which the improvements in flavor tagging have significant impact.
The accurate identification of heavy-flavour jets is crucial for many aspects of the LHC physics program. However, assigning flavour to jets presents significant challenges, primarily due to potential sensitivity to infrared energy scales. We present the results of a recent study that evaluates jet algorithms designed to be infrared and collinear safe and applicable in high-precision measurements. We consider several benchmark processes for heavy-flavour production at the LHC, exploiting both fixed-order and parton-shower simulations. We analyse the infrared sensitivity of these new algorithms at different stages of the event evolution and compare them to the flavour labelling strategies currently adopted by the LHC collaborations.
Accurate identification of jets that originate from heavy-flavor hadrons is pivotal for many ATLAS analyses, from Higgs-boson and top-quark measurements to searches for new physics. We present the newest heavy-flavor taggers from ATLAS, which introduce a full-transformer architecture tailored to the environment of Run 2 and Run 3.
The flavor tagging transformer processes low-level track, vertex, neutral particle, and muon information to extract correlations between the inputs and infer the origin of the jet. Compared with the current Run 3 baseline, the new model achieves better separation of $b$- and $c$-jets from light-flavor jets across a wide kinematic phase-space.
In this talk, we will discuss the architecture and training workflow, as well as the newest results. Applications in boosted object tagging or trigger may also be discussed.
sPHENIX is the first newly constructed detector at a hadron collider in over a decade, featuring a compact design and a set of unique, purpose-built capabilities not previously available at RHIC. The sPHENIX heavy-flavor program at RHIC is designed to address fundamental questions about the strongly interacting Quark-Gluon Plasma (QGP), using jets originating from heavy quarks as precision probes to bridge the gap between experimental results and theoretical models of QCD matter. The study of heavy-flavor jets is complementary to that of heavy-flavor hadrons and provides a great opportunity for comparisons with light-flavor jets.
At its core, sPHENIX employs the Monolithic Active Pixel Sensor (MAPS)-based Vertex Detector (MVTX), the Intermediate Silicon Tracker (INTT), a Time Projection Chamber (TPC), and a Micromegas-based Time Projection Chamber Outer Tracker (TPOT), which together provide the high-precision vertexing and tracking essential for reconstructing heavy-flavor hadrons within the pseudorapidity range |$\eta$| < 1.1. Jet reconstruction at sPHENIX leverages this precise tracking in combination with full azimuthal coverage from the Electromagnetic Calorimeter (EMCal) and the first Hadronic Calorimeter (HCal) at midrapidity at RHIC, enabling full reconstruction of jets, including both charged and neutral components.
This talk presents the strategies and methods developed for identifying jets tagged with heavy-flavor hadrons at sPHENIX. We will also report on the plans for heavy-flavor jet substructure measurements and the current status of heavy-flavor hadron and jet reconstruction based on the dataset collected during the 2024 run.
Heavy flavor (charm and bottom) production is a unique probe for testing perturbative Quantum Chromodynamics (pQCD) and studying the transport properties of nuclear media. Extracting heavy flavor signals is one of the most challenging measurements in collider experiments due to their very low production rates and large backgrounds. The new sPHENIX experiment at the Relativistic Heavy Ion Collider (RHIC) is optimized for precise heavy flavor measurements to reveal the properties of the quark-gluon plasma. The future Electron-Ion Collider (EIC) will utilize high-luminosity, high-energy electron+proton ($e+p$) and electron+nucleus ($e+A$) collisions at different center-of-mass energies (29 - 141 GeV) to address fundamental questions such as the hadronization mechanism. We will present heavy flavor jet studies using both traditional selection methods and new Machine Learning (ML) algorithms in sPHENIX 200 GeV $p$+$p$ simulation. A Graph Neural Network (GNN) is used to tag the jet flavor and is expected to significantly enhance the jet identification performance, especially for bottom and charm jets. A series of heavy flavor hadron and jet physics studies have been carried out in standalone simulations with parameterized EIC detector performance. We will present the open heavy flavor hadron and jet reconstruction capabilities of the EIC and the associated physics projections in comparison with recent theoretical calculations. The impact of these studies on advancing our understanding of the flavor-dependent parton energy loss and hadronization processes will be discussed as well.
The mass of heavy quarks modifies the radiation pattern of heavy-quark jets in comparison to their light quark counterparts, since the heavy quark mass effectively regularizes the soft and collinear divergences that would normally dominate the partonic cascade formation. This leads to the depletion of collinear gluon emissions relative to the heavy quark, an effect known as the dead cone effect. The dead cone of heavy-quark jets has been identified as a possible venue to isolate medium-induced radiation in a phase-space region where calculations are viable and where the large underlying event of a heavy-ion collision is absent. Previous measurements based on the construction of an angle-ordered tree of intrajet emissions have shown that it is possible to expose the dead cone experimentally. Novel jet substructure observables and algorithms are used to isolate hard and collinear emissions in the dead cone region with an improved sensitivity to charm quark mass effects using $D^0$-tagged jets in pp and PbPb collisions at 5.02 TeV. For the first time, the substructure of charm quark jets with a $p_{\rm T}$ greater than 100 GeV is analyzed, in a regime that should be relatively insensitive to nonperturbative effects. It is shown that the sensitivity to quark mass effects is present even at high $p_{\rm T}$.
Jet substructure measurements in heavy-ion collisions offer vital insights into the dynamics of jet quenching within the hot and dense QCD medium generated in these events. In this talk, we present new results from the ATLAS Collaboration on jet suppression and substructure using the Soft-Drop grooming technique in Pb+Pb and pp collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The study explores jet splitting across a broad range of angles for large-radius jets ($R = 1.0$), using charged particles to achieve high precision, providing access to small angular separations. This work unifies two previously published ATLAS analyses on small- and large-$R$ jets, providing a more comprehensive view of jet substructure. The degree of jet suppression is characterised by the nuclear modification factor $R_{AA}$, presented as a function of jet transverse momentum $p_T$, the opening angle of the hardest internal splitting $r_g$, and the transverse momentum scale $\sqrt{d_{12}}$. By comparing these results with theoretical models, we deepen our understanding of jet quenching mechanisms, explore the properties of the QCD medium, and challenge current theoretical frameworks in heavy-ion collisions.
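For reference, Soft-Drop grooming iteratively declusters the jet and keeps the first splitting satisfying the standard condition

$$ z \equiv \frac{\min(p_{T,1},\, p_{T,2})}{p_{T,1} + p_{T,2}} > z_{\mathrm{cut}} \left(\frac{\Delta R_{12}}{R}\right)^{\beta}, $$

with $r_g = \Delta R_{12}$ the opening angle of the selected splitting.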
Jets are powerful probes used to improve our understanding of the strong force at short distances. The radiation pattern of jets can be visualized via the Lund jet plane, a two-dimensional representation of the phase space of intrajet emissions in terms of the splitting angle and the transverse momentum of the emission relative to the emitter. The Lund jet plane allows for the separation of nonperturbative and perturbative effects in a modular fashion, enabling strong constraints on MC event generators and robust comparisons with first-principles QCD calculations. In heavy-ion collisions, the Lund jet plane can additionally be used to obtain a spacetime picture of the evolution of the jet shower as it traverses the quark-gluon plasma created in the collision. In this talk, we discuss new CMS jet substructure measurements in pp and PbPb collisions based on the Lund jet plane representation in inclusive jets with a GeV.
In this talk, we present the first measurement of jet substructure modification in the QGP medium with fully reconstructed jets using the $\Delta j$ observable, which quantifies the distance between two types of jet axes constructed from the same jet constituents. The E-scheme anti-$k_T$ and Winner-Takes-All axes are employed, providing different sensitivities to soft and semi-hard medium-induced radiation. The first CMS measurement of the fully unfolded distributions of the angular separation between these axes for anti-$k_T$ $R=0.4$ jets in 5.02 TeV PbPb collisions will be presented for several collision centralities and jet $p_T$ intervals. Significant narrowing in the $\Delta j$ distributions is observed in central compared to peripheral collisions across all $p_T$ intervals, suggesting a QGP-induced substructure modification. To test an alternative explanation for the narrowing that could be provided by the predicted color-charge dependence of energy loss, the data are compared to predictions from several models, including those with modified quark/gluon fractions due to quenching. These comparisons indicate that differences in quark/gluon energy loss alone cannot fully describe the central data, suggesting additional mechanisms for medium-induced substructure modification. These measurements probe jet substructure over the widest jet $p_T$ range in a previously unexplored kinematic domain, providing new constraints on the color-charge dependence of energy loss.
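For reference, the observable is the angular distance between the two axes (written here in $\eta$-$\phi$ space; conventions may use rapidity instead):

$$ \Delta j = \sqrt{(\Delta\eta)^{2} + (\Delta\phi)^{2}} $$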
Precision studies of jets and their substructure at the LHC require a robust theoretical description for anti-$k_t$ jet production. For small-radius jets, the cross section can be factorized into parton distribution functions, a hard function, and a jet function that encodes the jet clustering effects. We present the two-loop calculation of this jet function and uncover previously missing logarithms in the standard DGLAP factorization. Building on the recent understanding of energy correlators, we propose a corrected factorization formula and perform the next-to-next-to-leading logarithmic resummation for small-radius jet production at the LHC. This result shows good agreement with the CMS measurements. Our framework extends naturally to a broad class of jet substructure observables and other collider experiments, offering the potential for more precise QCD measurements.
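Schematically (convolution structure only, with indices, momentum fractions, and scales suppressed), the small-$R$ factorization referred to above takes the form

$$ \frac{d\sigma}{dp_T\, d\eta} \;\sim\; \sum_{a,b,c} f_a \otimes f_b \otimes H_{ab\to c} \otimes J_c\!\left(z,\, p_T R,\, \mu\right), $$

where $f_{a,b}$ are parton distribution functions, $H$ the hard function, and $J_c$ the jet function encoding the clustering effects.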
N-point energy correlators have recently gained prominence in high energy particle and nuclear physics due to features that make the observable a unique probe of multi-scale QCD evolution. Several experimental collaborations have measured the 2-point energy-energy correlator (EEC), an observable that quantifies the energy-weighted correlation of particle pairs within a jet, or within a full collider event, at a fixed opening angle. This has a direct connection to conformal-theory approaches that separate the different regimes within the observable as a function of the opening angle. In this talk, I will start with the recent CMS measurement of inclusive jet EECs in heavy ion collisions and walk through the different theory and phenomenology studies aimed at understanding the data. We close the talk by discussing the next steps in evaluating scale-dependent QCD evolution at various temperatures in proton-proton, proton-ion, and ion-ion collisions.
Jet substructure observables are an effective probe of QCD in many environments. In the vacuum of pp collisions, they simultaneously probe the weakly coupled properties of parton showers and the strongly coupled phenomena of hadronisation. In the presence of the deconfined QCD medium, jet substructure observables provide a multi-scale tool to test the scale-dependent evolution of the QGP medium's degrees of freedom. The excellent tracking capabilities of the ALICE detector enable jet substructure measurements down to low $p_{\textrm{T}}^{\textrm{ch jet}}$, probing the interface of pQCD and npQCD regimes, and with excellent precision in the challenging high occupancy in Pb--Pb collisions. We present a set of new jet substructure measurements in pp and heavy-ion collisions with ALICE, including energy correlators, jet angularities, and substructure of charm-tagged jets. Together, these measurements further our understanding of QCD and the properties of the QGP medium.
The experimental study of energy-energy correlators is rapidly emerging as a powerful tool for understanding the QCD dynamics that govern jet formation at the LHC. These observables have a particularly close relationship with the operators in the underlying field theory, allowing for strong theory-experiment correspondences and relatively straightforward theoretical interpretations of experimental results. This talk presents a detailed study of the QCD four-point operator using resolved energy-energy correlators in high-energy jets at the CMS experiment. Our results provide a unique view of the QCD dipole and tripole and are sensitive to angular correlations in the QCD showering dynamics.
One of the fundamental challenges in studying QCD and jet physics is the different effective degrees of freedom at different energy scales. The hard scattering processes which form jets, as well as the jet evolution, are described in terms of weakly-interacting partons, whereas the particles we observe and measure in our detectors are free hadrons. Theoretically, this means one does calculations with partons as the relevant degrees of freedom and then appropriately translates the parton-level predictions into some statement on the distribution of hadrons. In other words, this amounts to some understanding of matrix elements between partonic and hadronic states. In this work, we develop this procedure for multi-point correlation functions of general detector operators in a confining field theory, specifically QCD. This class of detector operators generalizes the ANE operator $\mathcal{E}$, from which the energy correlator is built, and is sensitive to generic powers of energy measured on an arbitrary subset of hadrons. In order to describe these more general detector operators, we introduce a broad set of universal, non-perturbative “detector functions,” enabling us to derive factorization theorems which separate perturbative and non-perturbative physics. We also study the leading non-perturbative corrections to these observables in QCD, which are large enough to significantly modify the perturbative scaling, and perform a numerical study using parton shower simulations, showing that we are able to describe the complicated behavior of these more general correlators. This work significantly broadens the space of detector operators over which we have theoretical control, including those that are compelling for phenomenological studies. Most notably, the ability to increase the energy weighting can significantly suppress the effects of soft particles, which is a useful feature in heavy-ion studies.
The jet mass of W bosons decaying to a quark-antiquark pair is measured in W+jet events from proton-proton collisions at the LHC at a center-of-mass energy of $\sqrt{s}$ = 13 TeV. W bosons with large transverse momentum (boost) produce strongly collimated decay products reconstructed as single large-radius jets. Jets initiated by W bosons, with their characteristic two-prong substructure, are distinguished from background single quark- and gluon-initiated jets using a jet substructure observable. We report the double-differential cross section in bins of jet transverse momentum and jet soft drop mass. A first measurement of the W mass in an all-jets final state at a hadron collider is reported.
The production of W/Z bosons in association with jets at the LHC provides an important test of perturbative QCD. In this talk, differential cross-sections of W boson production with at least one jet are measured for events in which the W boson decays in the electron or muon channels, and compared to predictions at next-to-next-to-leading-order (NNLO), with an emphasis on the phase space region in which the W boson and the leading jet are collinear. Moreover, measurements of diboson events with one of the two bosons decaying hadronically are presented.
Measuring Higgs boson production at high transverse momentum $p_T$ is a core component of the Higgs boson physics program at the LHC. The high-$p_T$ regime poses challenges to precise theoretical predictions and is a sensitive probe for indirect searches of new physics. This talk will review recent results of the CMS experiment in the search for Higgs boson production with high Lorentz boost.
The large production cross section of top-quark pairs at the LHC allows for detailed studies of the substructure of jets arising from light quarks, b-quarks, and gluons. In this talk, recent measurements of jet substructure in the decay products of top quarks performed by the ATLAS experiment are presented, using the reconstructed charged particles from the decay of W bosons and the fragmentation of b-quarks. One- and two-dimensional differential cross-sections for eight substructure variables, defined using only the charged components of the jets, as well as a measurement of the Lund plane, are discussed. The observed substructure distributions are compared with several MC generator predictions using different phenomenological models for parton showering and hadronization.

The mass of the top quark is an important parameter of the Standard Model, affecting the dynamics of elementary particles through radiative corrections. Precision top-quark mass measurements provide information for global fits of electroweak parameters, making them an essential tool to test the coherence of the SM and to probe its extensions. This talk presents a new measurement of the top-quark mass using a profile likelihood fit for $t\bar{t}$ events in the lepton + jets decay channel in which the hadronically decaying top quark has high transverse momentum. The measurement is performed using data from pp collisions at $\sqrt{s} = 13$ TeV collected by the ATLAS detector at the Large Hadron Collider from 2015 to 2018, corresponding to an integrated luminosity of 140 fb$^{-1}$. In this high-$p_T$ or “boosted” regime, the top-quark decay products become collimated in the detector, leading to less ambiguity in assigning jets and allowing the decay products to be captured in a single large-radius jet. The mean of the invariant mass of the reconstructed large-radius jet provides the sensitivity to the top quark mass and is simultaneously fitted with two additional observables to reduce the impact of the systematic uncertainties.
We present a study of the density of emissions in the Lund Jet Plane for hadronic decays of boosted top quarks. We consider the case where all top quark decay products are reconstructed in a single large jet. The three-prong decay of the top quark offers a rich substructure where the two quarks from the W decay are not color-connected to the b quark, and all three quarks originate from the decay of a heavy resonance. We study the radiation pattern between these quarks and show how to disentangle effects from the parton shower and the hadronisation process.
High charged-particle multiplicity events have been a central focus in the study of collective behavior across both large and small collision systems. A previous measurement of two-particle angular correlations in $e^+e^-$ collisions at center-of-mass energies up to $\sqrt{s} = 209\,\mathrm{GeV}$, using LEP-II data, revealed intriguing discrepancies with Monte Carlo predictions at high multiplicity, suggesting the possible emergence of long-range near-side correlations even in the simplest collision system. Unlike at lower energies, where quark-antiquark production dominates, $W^+W^-$ processes become increasingly important at multiplicities above 40. Could the observed excess in long-range correlations be explained by the more complex color-string configurations arising from $W^+W^-$ production, or is it simply a consequence of the higher final-state multiplicity enabled by the increased collision energy, independent of the underlying production mechanism?
To address these questions, we present a measurement of two-particle angular correlations in $e^+e^-$ collisions at $\sqrt{s} = 183\text{–}209\,\mathrm{GeV}$, with a focus on enhancing the contribution from $W^+W^-$ processes. The analysis uses data collected by the ALEPH detector during the LEP-II program. Correlation functions are evaluated across a broad range of pseudorapidities and full azimuth, in bins of charged-particle multiplicity. The correlation functions are further decomposed into a Fourier series, and the resulting harmonic coefficients $v_n$ are compared with predictions from event generators. These results provide new insights into the emergence of long-range correlations in small systems and offer important context for similar phenomena observed in proton-proton, proton-nucleus, and nucleus-nucleus collisions.
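The decomposition follows the standard two-particle convention,

$$ \frac{dN^{\mathrm{pair}}}{d\Delta\phi} \propto 1 + 2\sum_{n} V_{n\Delta}\,\cos(n\,\Delta\phi), \qquad v_n = \sqrt{V_{n\Delta}}, $$

where the single-particle $v_n$ follows under the usual factorization assumption.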
Measurements of a set of jet substructure observables that spans the kinematic phase space of six emissions in a jet are presented in 1-, 2- and 3-pronged large-radius-jet topologies at high transverse momentum. Light quark-(gluon-)like jets from QCD dijet events, and events enriched in boosted, hadronic decays of W bosons and top quarks in the muon+jets channel of $t\overline{t}$ production are analysed. The dataset recorded by the CMS experiment from proton-proton (pp) collisions at $\sqrt{s}=13$ TeV during Run 2 of the LHC is used, corresponding to a total integrated luminosity of approximately $138\ \mathrm{fb}^{-1}$. A detailed characterisation of the jet substructure is provided by measuring twenty-five $N$-subjettiness observables. These constitute a $6$-body basis that overconstrains the kinematic phase space of emissions in a jet. The saturation of discrimination power for W/top versus QCD jet classification is studied, and the choice of the basis of observables for the measurement is motivated. Measurements of all observables in the overcomplete $6$-body basis are then simultaneously unfolded, providing an estimate of the correlations between their distributions at the particle level. Distributions of the individual observables, extracted from the simultaneous unfolding, are then compared to predictions from different combinations of Monte Carlo event generators and parton shower programs.
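For reference, the $N$-subjettiness observables follow the standard definition,

$$ \tau_N = \frac{1}{d_0}\sum_{i\,\in\,\mathrm{jet}} p_{T,i}\,\min\!\left(\Delta R_{1,i},\,\ldots,\,\Delta R_{N,i}\right), \qquad d_0 = \sum_{i\,\in\,\mathrm{jet}} p_{T,i}\, R, $$

where the $\Delta R_{k,i}$ are the distances from particle $i$ to $N$ candidate subjet axes.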
State-of-the-art machine learning models in particle physics, such as PELICAN [1] and Particle Transformer [2], exhibit disadvantageous $\mathcal{O}(N^2)$ scaling with the multiplicity $N$ of the jets. Fundamentally, this has to do with the tendency for highly expressive models to involve complicated permutation-equivariant layers to model pairwise interactions. Similar limitations arise in other fields that use graph neural networks, such as chemistry. I will present a radically simpler architecture, CARDINAL, which balances universal expressivity and ultra-low computational cost by employing a permutation-invariant (and also Lorentz-invariant) embedding which allows for $\mathcal{O}(N)$ scaling. In common benchmark tests, it is tens or hundreds of times faster and less memory-demanding than comparable popular architectures.
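The abstract does not spell out the architecture; for intuition, below is a generic permutation-invariant embedding with $\mathcal{O}(N)$ cost in the Deep Sets spirit (a shared per-particle network followed by sum pooling). This is an illustrative stand-in, not the CARDINAL model itself:

```python
import torch
import torch.nn as nn

class PermInvariantEmbedding(nn.Module):
    """Permutation-invariant jet embedding with O(N) cost: each particle is
    processed independently by a shared network phi, then summed, so the cost
    grows linearly with multiplicity (no pairwise terms)."""
    def __init__(self, in_dim=4, hidden=64, out_dim=32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, particles):            # particles: (batch, N, in_dim)
        pooled = self.phi(particles).sum(dim=1)   # sum pooling: order-independent
        return self.rho(pooled)                   # fixed-size jet embedding
```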
[1] Alexander Bogatskiy, Timothy Hoffman, David W. Miller, Jan T. Offermann & Xiaoyang Liu, Explainable equivariant neural networks for particle physics: PELICAN, 2024.
[2] Huilin Qu, Congqiao Li, Sitian Qian, Particle Transformer for Jet Tagging, 2022.
Modern ML-based taggers have set new benchmarks for jet classification tasks at the LHC, surpassing traditional algorithms in performance. However, their opaque decision-making processes pose challenges for interpretability. In this work, we investigate what a low-level tagger learns when trained on quark-gluon discrimination. We identify a small set of learned latent features that correlate strongly with physics observables. Remarkably, only three latent features are sufficient to capture the full discriminative power. Moreover, we apply symbolic regression to derive compact analytic expressions that approximate the tagger output in terms of interpretable features.
Fixed-order perturbative calculations for physical cross sections can suffer from non-physical artifacts: they can be non-finite, non-positive, non-normalizable, and susceptible to large logarithmic corrections. We propose a framework that, given a fixed-order perturbative expression for an observable to some finite order in $\alpha$, will ``resum'' the expression in a way that is guaranteed to be finite, positive, normalized, and match the original expression order-by-order. Moreover, our ansatz parameterizes all possible finite, positive, and normalized completions consistent with the original fixed-order expression, including but not limited to N$^n$LL expressions.
We propose $w_i f_i$ ensembles, a novel framework to obtain asymptotic frequentist uncertainties on density ratios in the context of neural ratio estimation. In the case where the density ratio of interest is a likelihood ratio conditioned on parameters, for example a likelihood ratio of collider events conditioned on parameters of nature, it can be used to perform simulation-based inference on those parameters. We show how uncertainties on a density ratio can be estimated with ensembles and propagated to determine the resultant uncertainty on the estimated parameters. We then turn to an application in quantum chromodynamics (QCD), using ensembles to estimate the likelihood ratio between generated quark and gluon jets. We use this learned likelihood ratio to estimate the quark fraction in a mixed quark/gluon sample, showing that the resultant uncertainties empirically satisfy the desired coverage properties. We also examine the performance of this inferred likelihood ratio for reweighting gluon to quark jets, using the distributions of angularities as a benchmark.
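For intuition, the standard neural-ratio-estimation recipe with ensembling is sketched below: each classifier trained to separate the two samples yields a density-ratio estimate via $r(x) = p(x)/(1-p(x))$, and the ensemble spread gives an uncertainty on that estimate. This is a generic sketch, not the $w_i f_i$ construction of the paper:

```python
import numpy as np

def ensemble_ratio(classifiers, x):
    """Each classifier (with a sklearn-style predict_proba) separates, e.g.,
    quark from gluon jets; p/(1-p) estimates the likelihood ratio, and the
    mean/std across the ensemble give a central value and uncertainty."""
    ratios = np.array([clf.predict_proba(x)[:, 1] / clf.predict_proba(x)[:, 0]
                       for clf in classifiers])
    return ratios.mean(axis=0), ratios.std(axis=0)
```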
With the increasing size of machine learning (ML) models and vast datasets, foundation models have transformed how we apply ML to solve real-world problems. Multimodal language models like ChatGPT and Llama have expanded their capabilities to specialized tasks from a common pre-training. Similarly, in high-energy physics (HEP), common tasks in analyses face recurring challenges that demand scalable, data-driven solutions. In this talk, we present a foundation model for high-energy physics. Our model leverages extensive simulated datasets in pre-training to address common tasks across analyses, offering a unified starting point for specialized applications. We demonstrate the benefit of using such a pre-trained model in improving search sensitivity, anomaly detection, event reconstruction, feature generation, and beyond. By harnessing the power of pre-trained models, we can push the boundaries of discovery with greater efficiency and insight.
Precise, high-energy $e^{+}e^{-}$ data remain essential for a global understanding of particle physics. Despite decades since the end of the Large Electron-Positron Collider (LEP), such data continue to play a central role in precision measurements and the tuning of phenomenological models in parton shower Monte Carlo (MC) simulations. One particularly important observable is thrust, which probes QCD dynamics by measuring the alignment of hadronic final states into dijet-like topologies. Thrust has been widely used to extract the strong coupling constant $\alpha_{s}(m_{Z})$, tune hadronization models, and study non-perturbative effects. Recent theoretical developments have renewed the focus on thrust, revealing discrepancies between precision extractions of $\alpha_{s}(m_{Z})$ from dijet observables and the world average, and emphasizing the relevance of thrust moments in constraining both perturbative and non-perturbative QCD components. The fixed binning choices in existing LEP thrust measurements, however, have emerged as a key limitation restricting the scope of studies utilizing the results. In this work, we present the first unbinned measurement of the $\log(1 - T)$ distribution in $e^{+}e^{-}$ collisions at $\sqrt{s} = 91.2$ GeV using archived data from the ALEPH experiment at LEP. The thrust is reconstructed from charged and neutral particles in hadronic $Z$ decays. Detector effects are corrected using an ML-based method for unbinned unfolding called UniFold, enabling downstream fully differential studies with variable binning. This measurement provides valuable input for ongoing theoretical developments and MC tuning, and opens the door to new studies of $e^{+}e^{-}$ data.
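For reference, thrust is defined as

$$ T = \max_{\hat{n}} \frac{\sum_i |\vec{p}_i \cdot \hat{n}|}{\sum_i |\vec{p}_i|}, $$

so that $T \to 1$ for pencil-like dijet events and decreases for more isotropic topologies.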
(contribution via Zoom)
The practice of collider physics typically involves the marginalization of multi-dimensional collider data to one-dimensional observables relevant for some physics task. In many cases, such as discrimination or anomaly detection, the observable can be arbitrarily complicated, such as the output of a neural network. For precision measurements, however, the observable must correspond to something computable systematically beyond the level of current simulation tools.

In this work, we demonstrate that the exploration of precision-theory-compatible observable space can be systematized using neural simulation-based inference techniques from machine learning. We illustrate this approach by exploring the space of marginalizations of the energy 3-point correlation function to optimize sensitivity to the top quark mass. We first learn the energy-weighted probability density from simulation, then search the space of marginalizations for an optimal triangle shape. We find that isosceles triangles with a side ratio of $1:1:\sqrt{2}$ (i.e. right triangles) improve over the equilateral triangles used previously. Although simulations are used as a leading-order approximation to the theory, and machine learning is used to find an optimal observable, the output is an observable definition that can then be computed to high precision and compared directly to data, without any memory of the computations that produced it.
In recent years, there has been a resurgence of interest in energy-energy correlators (EECs) for the study of hadronic collisions at both the LHC and RHIC, ranging from small to large systems. Measurements of EECs of particles within jets offer a clear separation of scales that is useful for studying both perturbative and non-perturbative QCD in the collinear limit, as well as the transition between these two regimes. In the $e^{+}e^{-}$ environment, measurements of EECs can be performed using all particles in the event, allowing the back-to-back (or Sudakov) limit to also be investigated. This talk presents high-precision, fully corrected EEC results from the archived ALEPH $e^{+}e^{-}$ data taken at LEP at $\sqrt{s}$ = 91.2 GeV. The data are compared to archived PYTHIA 6 MC produced by the ALEPH collaboration, as well as to theoretical predictions with next-to-next-to-next-to-leading-log (N$^3$LL) resummation in the collinear limit and N$^4$LL resummation in the Sudakov limit, showing excellent agreement with data. This measurement allows for precision tests of QCD in the relatively unexplored Sudakov limit.
We present a measurement of thrust, energy-energy correlators, and two-particle angular correlations of charged particles in $e^+e^-$ collisions at center-of-mass energies up to $\sqrt{s} = 209\,\mathrm{GeV}$, using newly released open data from the DELPHI experiment at LEP-I and LEP-II. The thrust and energy-energy correlator, measured with unprecedented resolution and precision, are compared with various MC and analytic predictions. These analyses, leveraging DELPHI’s unique detector geometry and reconstruction capabilities, complement previous DELPHI and recent ALEPH re-analysis results.
In addition, this study explores potential collective behavior in $e^+e^-$ collisions by analyzing angular correlations over a wide range of pseudo-rapidities and full azimuth, as a function of event charged-particle multiplicity with respect to thrust. Earlier LEP-I results at $\sqrt{s} = 91\,\mathrm{GeV}$, primarily from $Z$ boson decays, showed no significant long-range correlations and were consistent with PYTHIA v6.1 expectations. However, the higher-energy LEP-II environment introduces richer dynamics, including enhanced $W^+W^-$ production and multi-jet final states, leading to higher multiplicities and more complex event topologies. Together with archived ALEPH and Belle data, they provide valuable benchmarks for future studies of collective behavior in high-energy collisions and broaden the landscape of correlation measurements in small collision systems.
Many theories beyond the Standard Model (SM) have been proposed to address several of the SM shortcomings, often predicting new particles which can be searched for at the LHC. This can include extended Higgs sectors, supersymmetric particles, heavy vectors or scalars, vector-like fermions, and further exotic particles. This talk will cover several related searches, focusing on prompt topologies and conventional analysis techniques.
Searches for new resonances in di-boson (VV, VH, HH, where V = W, Z) and tri-boson final states with the CMS detector are presented. The analyses are optimised for high sensitivity over a large range of resonance masses. Jet substructure techniques are used to identify hadronic decays of highly boosted W, Z, and H bosons. A statistical combination of these searches provides the most stringent constraints on heavy vector bosons with large couplings to standard model bosons and fermions.
In this talk, we present our extension of the concept of maximal quantum entanglement from proton structure to jet fragmentation in proton-proton collisions, establishing a connection between jet fragmentation functions and charged hadron multiplicity [1]. This relationship is tested using ATLAS data from the Large Hadron Collider, showing excellent agreement. As the first study to apply quantum entanglement concepts to hadronization within jets, our results provide new insights into the quantum aspects of hadronization and the transition between perturbative and non-perturbative QCD, deepening our understanding of confined nuclear matter.
[1] J. Datta, A. Deshpande, D. E. Kharzeev, C. J. Naïm, and Z. Tu, “Entanglement as a Probe of Hadronization,” Phys. Rev. Lett., vol. 134, no. 11, p. 111902, 2025, doi: 10.1103/PhysRevLett.134.111902.
Avoided level crossing is a ubiquitous phenomenon across various fields of physics. In this talk, I will briefly describe the theoretical foundations for formulating the DGLAP/BFKL (Dokshitzer–Gribov–Lipatov–Altarelli–Parisi/Balitsky–Fadin–Kuraev–Lipatov) mixing as an avoided level crossing in QCD. We propose a family of complex event shape observables in collider physics to search for such a phenomenon. I will present results from both Monte Carlo simulation and CMS Open Data that confirm our theoretical prediction, providing strong evidence for the level repulsion phenomenon in QCD dynamics.
This talk presents recent advancements in the angular-ordered parton shower algorithm, extending it to Beyond the Standard Model (BSM) radiation on top of the existing Standard Model (SM)-only algorithm. Capitalising on the fact that the shower kinematics are fully determined by the spins of the involved particles, helicity-dependent splitting functions for all viable combinations of scalar bosons, fermions, and vector bosons have been calculated. These functions are integrated into Herwig 7, facilitating a user-friendly implementation of BSM parton showers. The accuracy of this framework is validated through comparisons with fixed-order (FO) calculations, confirming that the BSM splitting functions precisely reproduce the FO results in the soft and collinear regions. Moreover, implications of BSM particle splittings at the Large Hadron Collider and possible future colliders are explored, specifically examining some simple extensions of the SM. This work not only enhances our understanding of the parton shower process in more generalised scenarios but also opens new avenues to study BSM contributions in the logarithmically enhanced region.
This talk presents the latest results from searches targeting hidden valley models, performed in proton-proton collision data recorded by CMS. These models propose new strong-like forces, sometimes called "dark QCD", which lead to composite dark matter in the form of "dark hadrons" that are difficult to search for and evade the usual collider dark matter searches. Evidence of these models can still be found in collider datasets by utilizing specific signatures, including semivisible jets, emerging jets, and soft unclustered energy patterns. These searches utilize boosted substructure techniques and machine learning algorithms to increase sensitivity and our ability to search for these signatures.
Many theories beyond the Standard Model (SM) have been proposed to address several of the SM shortcomings. Some of these beyond-the-SM extensions predict new particles or interactions directly accessible at the LHC, but which would leave unconventional signatures in the ATLAS detector. These unconventional signatures require special techniques and reconstruction algorithms to be developed, enabling analysers to perform unique searches for new physics. Conversely, some searches for more standard models also make use of unconventional workflows to improve sensitivity. This talk will cover several such recent searches at ATLAS.