25th IEEE Real Time Conference - La Biodola, Elba, Italy
Welcome to the 25th IEEE Real Time Conference

Like previous editions, RT2026 will be a multidisciplinary conference devoted to the latest developments in real-time techniques in the fields of plasma and nuclear fusion, particle physics, nuclear physics and astrophysics, space science, accelerators, medical physics, nuclear power instrumentation, and other radiation instrumentation.
The conference will provide an opportunity for scientists, engineers, and students from all over the world to share the latest research and developments. This event also attracts relevant industries, integrating a broad spectrum of computing applications and technology.
Women in Engineering Event

08:30 → 09:00  Opening talks, Maria Luisa Room (Hotel Hermitage)
09:00 → 09:40  Invited talk, Maria Luisa Room (Hotel Hermitage)
Convener: David Abbott
09:00  Data Acquisition and Processing for Next-Generation PET: From Wearable PET to Total-Body PET (40m)
Speaker: Dr Qiyu Peng (Lawrence Berkeley National Laboratory)
09:40 → 11:00  Real Time Diagnostics, Digital Twin, Control, Monitoring, Safety and Security, Maria Luisa Room (Hotel Hermitage)
Convener: Martin Grossmann (Paul Scherrer Institut)
09:40  Evaluation of APODIS Plasma Disruption Predictor within the ITER Real-Time Framework (20m)
Plasma disruptions in Tokamaks are among the main threats to the safe operation of nuclear fusion devices. Disruptions are a sudden loss of plasma confinement. The disruptive events release large amounts of energy that impact the first wall components, affecting their integrity. This work presents the development of a disruption predictor implemented in the ITER Real-Time Framework (RTF). The objective is to test the RTF platform response under very demanding conditions, such as predicting disruptions. The predictor evaluated is the JET Advanced Predictor Of DISruptions (APODIS), developed during the first JET experimental campaign of the ITER-like Wall (ILW). The predictor has been trained and tested with JET data. The real-time prediction meets the strict latency and determinism requirements of real-time control systems for fusion devices. Additionally, the Synchronous Databus Network (SDN) has been utilized for low-latency data transmission via a publisher-subscriber model. To guarantee real-time performance, several optimization techniques have been applied, including real-time threading, CPU core isolation, and network-card parameter fine-tuning. Validation was performed through numerical verification and detailed latency characterization. Results demonstrate that each APODIS prediction in the RTF platform requires an average processing time below 100$\mu s$, with a standard deviation under 10$\mu s$ and a maximum outlier of 300$\mu s$. Considering that the sampling period of the signals can be O(1 ms), these times are sufficient for the problem at hand. Therefore, the proposed solution confirms the feasibility of the RTF platform for real-time demanding tasks (for instance, disruption prediction) in next-generation nuclear fusion devices.
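The latency figures above lend themselves to a simple acceptance check. The sketch below is our illustration only, with made-up sample values and the thresholds quoted in the abstract; `check_latency` is a hypothetical helper, not part of APODIS or the RTF:

```python
import statistics

# Hypothetical post-processing of per-cycle latency samples (in microseconds),
# mirroring the figures quoted in the abstract: mean < 100 us,
# standard deviation < 10 us, worst-case outlier <= 300 us.
def check_latency(samples_us, mean_limit=100.0, std_limit=10.0, max_limit=300.0):
    mean = statistics.fmean(samples_us)
    std = statistics.stdev(samples_us)
    worst = max(samples_us)
    return {
        "mean_us": mean,
        "std_us": std,
        "max_us": worst,
        "passed": mean < mean_limit and std < std_limit and worst <= max_limit,
    }

# Synthetic example: nine nominal cycles plus one outlier.
report = check_latency([92.0, 95.5, 90.1, 93.3, 94.8, 91.7, 96.2, 89.9, 92.4, 120.0])
print(report["passed"])  # True
```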
Speaker: German Arranz (Universidad Politécnica de Madrid)
10:00  Synchronization of the PLC Subsystems with the Timing Distribution System at the European XFEL (20m)
The European XFEL generates bursts of up to 2700 ultra-short x-ray flashes with a spacing of only 220 nanoseconds every 100 ms. These flashes are produced in the linear electron accelerator by undulators and guided 1 kilometer in vacuum pipes until they reach the scientific experiments. Accurate synchronization and the availability of information about all configurable pulses within a single burst, which can vary every 10 Hz cycle, enable the full potential of control, monitoring, and measurement in terms of accuracy and utilization. The MicroTCA-based timing system provides all the necessary capabilities. However, the industrial programmable logic controllers (PLCs), which manage most of the control electronics of the beamlines and experiments, are only synchronized at the millisecond level through UART and NTP interfaces. This paper presents a new development based on an FPGA SoC EtherCAT solution in MicroTCA that implements an interface to the PLC using distributed clocks to achieve synchronization at the microsecond level and provide beam-related information in a deterministic manner.
Speaker: Dmytro Levit (European XFEL GmbH)
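EtherCAT distributed clocks discipline each slave clock by comparing it against the master's system time after accounting for frame propagation delay. A minimal sketch of that correction, with invented numbers and a hypothetical `dc_offset_ns` helper (not the actual XFEL implementation):

```python
# Simplified EtherCAT-style distributed-clock correction (illustrative only):
# the slave's local time is compared against the master's system time plus
# the measured frame propagation delay; the residual is slewed out.
def dc_offset_ns(master_time_ns, slave_time_ns, propagation_delay_ns):
    return slave_time_ns - (master_time_ns + propagation_delay_ns)

off = dc_offset_ns(1_000_000, 1_000_350, 120)
print(off)  # 230 ns to be slewed out of the slave clock
```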
10:20  Configurable real-time emergency pulse termination system for investment protection in ITER’s interlock architecture (20m)
This paper presents the design, implementation, and verification of ITER's Central Interlock System Critical Gateway (CIS-CG), a system for real-time emergency pulse termination. The CIS-CG serves as the critical interface coordinating protective actions among the Plasma Control System (PCS), the Advanced Protection System (APS) and the Central Interlock System (CIS), during fault conditions.
The system implements a general solution that addresses the ITER investment protection goals through configuration, while enabling early integration with evolving plant designs. Its functional architecture includes 42 configurable stop-codes mapped to 32 composite interlock actions through a priority-based countdown mechanism. This design enables adaptable protection strategies through configuration rather than hardware modifications as operational requirements and risks evolve.
The dual-redundant architecture of the system operates with 100 µs control loop timing across the Main Server Room (MSR) and the Backup Server Room (BSR), utilizing CIS Fast Architecture FPGA-based technology. Operation of this interlock system bridges the gap between ITER's pulsed interlock operating states and ITER's Common Operating States (COS), which orchestrate pulsed operation among the tokamak's diverse plant systems. This system represents the first IEC 61508-compliant implementation enabling critical-parameter communication from the pulse schedule to the interlock layer.
Test results demonstrate the system's capability to meet stringent reliability requirements while maintaining configurability and fail-safe operation throughout ITER's experimental campaigns.
Speaker: Damien Karkinsky (ITER)
10:40  Firmware Architecture for the ePixUHR Detector Enabling 35-kHz X-ray Imaging with Real-Time FPGA/GPU Processing (20m)
The ePixUHR 35-kfps detector system addresses the need for ultra-high-rate x-ray imaging in next-generation FEL experiments by combining a modular 1-megapixel “SuperTile” detector architecture with an integrated real-time FPGA/GPU processing pipeline. Each SuperTile incorporates compact sensor/ASIC units and high-bandwidth front-end electronics capable of sustaining more than 80 Gb/s of streaming data while remaining fully synchronized with LCLS-II timing. The backend processing chain performs deterministic event assembly, dual-gain pixel processing, and GPU-accelerated data reduction. We demonstrate reliable 35-kHz acquisition and high-throughput delivery of detector data to the GPU, with ongoing work focused on completing the GPU processing pipeline and integrating with experiment data systems. These features establish ePixUHR as a scalable platform for future ultra-high-rate x-ray detectors requiring real-time computational performance.
Speaker: Larry Ruckman (SLAC National Accelerator Laboratory)
11:00 → 11:30  Coffee break (30m)
11:30 → 12:30  Mini Orals, Maria Luisa Room (Hotel Hermitage)
11:30  12 Modular Hardware, Firmware, and Control-System Integration: Lessons from Commissioning the Fast Beam Interlock System at ESS (20m)
The Fast Beam Interlock System (FBIS) at the European Spallation Source (ESS) is a distributed, safety-critical system that collects status information from multiple accelerator subsystems and, when unsafe conditions arise, stops both the machine and the proton beam within microseconds. During the ESS commissioning process, from integration testing to full beam operation, FBIS was developed using a modular approach for hardware, firmware, and control system interfaces. Standardized mTCA crates and cPCI platforms enabled the sharing and reuse of hardware components across different interlocking and diagnostic systems, simplifying development and maintenance. Each parameterized firmware release is validated with rigorous laboratory testing that mirrors simulation methodologies, ensuring reliable system performance before deployment and efficient management of diagnostic data and events during operations. System timing ensures precise alignment of diagnostic data across FBIS crates. This paper summarizes the practical and architectural lessons learned from the implementation of FBIS, highlighting how the modular design of hardware and firmware enabled flexible adaptation, reliable operation, and consistent integration at the control-system level during machine commissioning.
Speaker: Dr Stefano Pavinato (European Spallation Source)
11:50  16 Towards Virtualized 100+ Gbps Data Acquisition Software Systems (20m)
Modern detectors in scientific infrastructures yield data at higher rates and in larger volumes. Despite the extensive data reduction in detector electronics, scientists employ software-based functions, e.g., high-level triggers, for further online data reduction that run on a dedicated computer cluster located at the scientific infrastructure's site. As a consequence, scaling, operating, and maintaining the stability and performance of this cluster becomes a demanding responsibility that requires extensive time and manpower. This paper proposes Data Acquisition Functions Virtualization (DFV), a new paradigm to minimize these operational efforts by eliminating computer clusters in scientific infrastructures and by running software-based online data reduction functions on widely available general-purpose campus computing facilities. DFV leverages computer virtualization and high-performance Ethernet networking to isolate software-based functions on campus computing facilities while sustaining their required input throughput. We explore the key technical challenges to realizing DFV and propose the Data Acquisition Development Kit (DQDK), a novel framework for high-performance Ethernet-based readout functions and a cornerstone of DFV. We quantify the performance of the framework with and without computer virtualization, considering different virtualization setups. The new data acquisition paradigm is applied to the TRISTAN upgrade of the KATRIN experiment at the Karlsruhe Institute of Technology. The framework can reduce CPU resources by a factor of 2.67 and save up to 15% of the consumed energy.
Speaker: Dr Jalal Mostafa
12:00  105 A Timing-Oriented Pixel Detector ASIC With Delay-Chain-Based Outputs for Three-Dimensional Particle Track Reconstruction (20m)
With the growing demand for real-time particle detection and precise event reconstruction, pixel detector readout chips with accurate timing capability for time-resolved particle tracking have become increasingly important. This work presents an energy- and timing-oriented pixel detector ASIC for three-dimensional particle track reconstruction.
The chip is fabricated in a GSMC 130~nm CMOS process and implements a $10 \times 10$ pixel array, occupying a total chip area of $1~\mathrm{mm} \times 1.5~\mathrm{mm}$, with each pixel measuring $90~\mu\mathrm{m} \times 50~\mu\mathrm{m}$. Two on-chip delay chains form the basis of the time measurement scheme, in which three selectable delay stages provide nominal delays of 812~ps, 7.18~ns, and 14.36~ns, respectively, achieving an intrinsic timing resolution of 278.4~ps. Each pixel integrates an analog front-end circuit composed of a low-noise charge-sensitive amplifier, a comparator, and a tunable delay module. The analog front-end achieves an equivalent noise charge of 13.9~$e^{-}$ and a charge-to-voltage conversion gain of 30.5~$\mu\mathrm{V}/e^{-}$. A digital readout circuit implements row--column rolling-shutter scanning across the pixel array.
Energy information is digitized by an external ADC, while timing information from on-chip tapped delay-chain outputs is processed in an FPGA to extract time-of-arrival values. The FPGA-processed data are transferred to a host computer for three-dimensional particle track reconstruction. Simulation results based on the proposed architecture demonstrate the feasibility of time-resolved three-dimensional particle tracking.
Speaker: Chunlai Dong
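The delay-chain timing scheme can be illustrated with a toy decoder: a coarse counter supplies the clock-period part of the time of arrival, and the thermometer code from the delay taps supplies the fine part. All names and the 25 ns clock period below are our assumptions for illustration; only the 278.4 ps tap resolution comes from the abstract:

```python
# Illustrative decode (not the chip's actual algorithm): reconstruct a time
# of arrival from a coarse counter plus a tapped-delay-chain thermometer code,
# as an FPGA might do with the delay-chain outputs described above.
def decode_toa(coarse_count, thermometer_bits, clk_period_ps=25000.0, tap_delay_ps=278.4):
    # Thermometer code: a run of 1s followed by 0s; the number of 1s is
    # the number of taps the signal traversed before the sampling edge.
    fine_taps = sum(thermometer_bits)
    return coarse_count * clk_period_ps + fine_taps * tap_delay_ps

toa_ps = decode_toa(3, [1, 1, 1, 1, 0, 0, 0, 0])
print(toa_ps)  # 3 * 25000 + 4 * 278.4 ps
```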
12:00  108 Benchmarking SRO Performance of CAEN Digitizers (20m)
The CAEN x2740/x2745, x2730, and x2751 digitizers belong to the Digitizer 2.0 family and are designed to meet the high data rate requirements of modern nuclear and particle physics experiments, medical imaging systems, and large-scale detector readouts. These platforms support both triggered acquisition and continuous streaming readout, integrating high-speed Flash ADCs, FPGA-based real-time processing, large DDR4 buffers, and native USB 3.1, 1 GbE TCP, and 10 GbE UDP connectivity. This work provides a performance evaluation of streaming readout architectures. Hardware benchmarks were performed employing triggered and streaming readout firmware in order to investigate saturation behavior and identify throughput limitations. In triggered readout with raw waveform transmission over 10 GbE UDP, data rates close to 1.1 GB/s were achieved with no packet loss. In list-mode streaming readout with onboard processing, the maximum event rate is limited by the internal FPGA event-sorting algorithm rather than by network bandwidth. Software benchmarks using the CAEN FELib and the CoMPASS DAQ software highlight additional decoding and processing bottlenecks.
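A quick way to see where a streaming link saturates is to divide the sustained payload bandwidth by the event size. The sketch below uses the 1.1 GB/s figure from the abstract; the 2 kB event size and the helper name are invented for illustration:

```python
# Back-of-envelope saturation check (illustrative numbers; only the 1.1 GB/s
# sustained payload figure is taken from the benchmark above): the maximum
# event rate a link can carry for a given event size.
def max_event_rate_hz(sustained_bytes_per_s, event_size_bytes):
    return sustained_bytes_per_s / event_size_bytes

# e.g. raw waveforms of 2 kB per event over the measured 1.1 GB/s:
rate = max_event_rate_hz(1.1e9, 2048)
print(round(rate))  # ~537 kHz
```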
Speaker: Mr Carlo Tintori (CAEN S.p.A.)
12:00  132 Millimeter-Wave-Based Wireless Transmission System for CEPC Detector (20m)
To meet the Circular Electron-Positron Collider (CEPC) Detector’s demand for low-material-budget and high-reliability data transmission, this study proposes a 60 GHz millimeter-wave (mm-wave) transmission system. The core transceiver chips were verified radiation-tolerant: they maintained stable communication under 7 Mrad X-ray irradiation (5.5 h) and 1.2×10¹² n_eq/cm² neutron irradiation (21 h), satisfying CEPC’s harsh radiation requirements. Optimized with low-noise amplifiers and patch antennas (replacing horn antennas), the system achieved a maximum transmission rate of 6.25 Gbps at 67.5 cm, with negligible crosstalk with the Taichu3 Vertex Pixel prototype chip. A multi-channel system integrated via flexible PCBs realized error-free transmission (BER < 10⁻¹⁴) at up to 18.75 Gbps over 30 cm, offering a scalable, low-cable solution for CEPC and other high-energy physics experiments.
Speaker: Dr Jun Hu (Institute of High Energy Physics, Chinese Academy of Sciences)
12:00  137 Precision analysis and development of timing measurement ASICs for strip LGAD readout (20m)
For the proposed Circular Electron Positron Collider (CEPC) research and development program, the Outer Tracker (OTK) plans to utilize AC-coupled Low-Gain Avalanche Detector (AC-LGAD) microstrip sensors to achieve high-precision spatial measurements (10 µm) and timing measurements (50 ps). The initial prototype of the strip LGAD readout ASIC, named LATRIC0, is a single-channel design that includes a front-end amplifier and has been fabricated using 55-nm CMOS technology. Measurement results indicate that the prototype achieves bin sizes of approximately 30 ps, a time precision of less than 1 Least Significant Bit (LSB), and a power consumption of less than 6.3 mW per channel at a 1-MHz event rate, thereby preliminarily meeting the detector requirements. The second iteration, LATRIC8, features eight channels with a pitch of 100 μm and incorporates optimizations to the TDC core to address issues identified during LATRIC0 testing. Testing of LATRIC8 is expected to begin in early March. This report will focus on precision measurements, analysis, and optimization of LATRIC0, as well as preliminary tests of LATRIC8. The analysis, design optimizations, and measurement results will be presented.
Speakers: Chuanye Wang (Nanjing University), Xiaoting Li (IHEP)
12:00  140 Development of the SAMIDARE Board: A New Acquisition System for TPCs (20m)
In the field of accelerator-based nuclear physics, significant progress has been made in increasing beam intensities. Consequently, acquisition systems must keep pace with the rising event rates and channel counts of increasingly complex experimental setups. Time projection chambers usually feature thousands of channels and, at exotic-nuclei facilities, are often used in an active-target configuration to allow inverse-kinematics reactions on thicker targets. Beyond the high number of channels, such a configuration adds to the DAQ complexity the problem of distinguishing between the beam-induced background and the recoil signal.
Following the obsolescence of the de-facto standard General Electronics for TPCs (GET), the SPADI Alliance started the development of a new acquisition board for TPC signals: the SAMIDARE Board. Although designed for potential future streaming readout purposes, the current first application aims to reduce data throughput via on-device FPGA filtering. In this contribution, an overview of the device and the prototype firmware will be presented.
Speaker: Claudio Santonastaso (RIKEN Nishina Center)
12:00  142 A High-precision Frequency-adaptive Clock Synchronization System (20m)
In accelerator facilities, hundreds of devices are distributed over large areas. To ensure synchronous operation of all components, a high-precision clock distribution and synchronization system is required. However, due to the use of specific clock frequencies and the increasingly stringent synchronization accuracy requirements of modern accelerator systems, conventional approaches based on the standard White Rabbit technique are no longer sufficient. This work presents a high-precision, frequency-adaptive clock distribution and synchronization system based on high-speed transceivers and an improved Direct Digital Synthesis (DDS) technique implemented on a high-performance FPGA. By exploiting the precise delay adjustment capability of the FPGA high-speed transceivers and employing transceiver delay fixing techniques, high-precision reference clock distribution and synchronization are achieved. In addition, a high-resolution FPGA Time-to-Digital Converter (TDC) is used to extract the frequency and phase information of the input signal in a fully digital manner, eliminating the need for complex analog circuits at the master node. In conjunction with waveform recovery at the slave nodes, self-adaptive, high-precision distribution and synchronization of arbitrary-frequency clocks are realized. Test results show that the proposed system achieves a synchronization accuracy better than 15 ps for specific-frequency clock distribution. Over a temperature variation of 50 °C, the phase variation between nodes remains within 20 ps, while reliable data and command transmission is also achieved.
Speaker: Jiajun Qin (University of Science and Technology of China (CN))
12:00  188 EJFAT - ESnet / Jefferson Lab FPGA Accelerated Transport: Design Improvements and User Experiences (20m)
EJFAT (ESnet / Jefferson Lab's FPGA Accelerated Transport) is a scalable, terabit-scale streaming and load-balancing protocol for transporting DAQ (data acquisition) samples into a cluster of compute resources over the wide-area network. This setting entails unique design choices. In this poster session, we present the protocol's design, the choices that had to be made, and the rationale behind them. We also present the current state of our deployment and three scientific use cases across several Department of Energy (DOE) labs and supercomputing centres. We introduce improvements in the frame format, Reed-Solomon FEC protection, multi-user resource allocation, and E2SAR, the primary API and protocol for originating and terminating data flows into EJFAT. All EJFAT source code is available as open source. EJFAT is also operated as a live service by ESnet for use by DOE scientists and international collaborators.
Speaker: Yatish Kumar
12:00  22 FPGA-Based Front-End Electronics for AC-LGAD Detector (20m)
The Low Gain Avalanche Detector (LGAD) is a high-precision silicon-based timing detector known for its excellent signal-to-noise ratio and a typical gain of 10–50. The AC-coupled Low Gain Avalanche Detector (AC-LGAD), evolved from the LGAD, represents the latest generation of high-precision 4D detectors capable of simultaneously measuring both the time and position of particles.
We propose a 16-channel front-end electronics (FEE) readout system for AC-LGADs. The analog part of each channel consists of a radio-frequency preamplifier and a high-speed discriminator. All discriminated signals are fed to an FPGA, in which 16 channels of Time-to-Digital Converters (TDCs) based on the tapped-delay-line (TDL) structure are implemented. Test results show that a timing resolution of ~6 ps can be achieved for the TDC.
The FEE system currently provides time-of-arrival (TOA) measurement; a high-precision time-over-threshold (TOT) module is under development. The intrinsic timing resolution of the FEE was measured with a pulse generator and is about 10 ps for one channel. With our customized FEE, the timing resolution for the AC-LGAD detectors can reach ~40 ps, and it can be further improved after the TOT calibration in the future.
Speaker: Ms Liyan Jin (Shandong University)
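The ~10 ps electronics contribution and the ~40 ps overall resolution are conventionally related by quadrature. A minimal sketch of that arithmetic (our illustration, not the authors' analysis code):

```python
import math

# Quadrature-subtraction sketch: if the electronics contribute ~10 ps and the
# full detector + FEE chain measures ~40 ps, the detector-only contribution
# follows from subtracting in quadrature. Figures are the rounded values
# quoted in the abstract, used here purely for illustration.
def quadrature_subtract(sigma_total_ps, sigma_electronics_ps):
    return math.sqrt(sigma_total_ps**2 - sigma_electronics_ps**2)

sigma_detector = quadrature_subtract(40.0, 10.0)
print(round(sigma_detector, 2))  # ~38.73 ps
```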
12:00  32 A Reliable Data Processing Board for the front-end electronics of the Hyper-Kamiokande Experiment (20m)
The front-end electronics in the Far Detector of the Hyper-Kamiokande experiment are undergoing an upgrade in performance and complexity in comparison to their predecessor in Super-Kamiokande. The electronics that read out the PMTs (20,000 Hamamatsu R12860) must be mounted inside sealed steel vessels in water. The inability to service the modules after installation, together with the need for more complex electronics, requires the development of a new board that will aggregate the data coming from the digitizer boards and transmit them through optical links to the DAQ sitting outside the detector. Furthermore, this board must be highly reliable, with at most 10% of units failing after 20 years of operation. Through a careful selection of components together with redundant elements, the target reliability is reached. This work describes the hardware design of the Data Processing Board (DPB), a System-on-Chip design that leverages a SOM-baseboard (System on Module) layout combining the versatility of FPGAs with the power of CPUs, allowing both data taking and system monitoring for each of the one thousand vessels present in the Hyper-Kamiokande Far Detector. This design has already been tested and iterated upon by manufacturing pre-series units and is entering mass production of one thousand units.
Speaker: Alejandro Gómez Gambín (Universitat Politècnica de València)
12:00  38 Readout Electronics System for the CsI(Tl) Calorimeter of the VLAST-P Payload (20m)
The VLAST-P (Very Large Area gamma-ray Space Telescope-Pathfinder) is a miniaturized space detector designed to detect high-energy gamma-rays from the Sun (in the MeV to GeV energy range), as well as high-energy protons from solar flares. The CsI(Tl) calorimeter is a crucial part of the VLAST-P payload for measuring the energy of incoming high-energy photons and providing the hit information from cosmic rays within the calorimeter for the trigger selection of effective events. An electronics system consisting of 4 FEE (Front-End Electronics) modules and 1 PAM (Pre-Amplifier Module), with a total power consumption of about 22 W, has been developed. Signals from the avalanche photodiodes (APDs) are amplified and shaped by a CSA (Charge Sensitive Amplifier), then split into high and low electronic gains to achieve a large dynamic range. The FEEs digitize analog signals from a total of 100 channels, transmit the scientific data to the payload electronics control unit, and provide hit-signal translation. To assure long-term reliability in the harsh space environment, radiation hardness, thermal design, and component- and board-level quality control were carefully considered. In addition, the key indexes of energy linearity, noise level, and dynamic range were preliminarily studied. Test results showed that the system-level ENC (Equivalent Noise Charge) for each high-gain channel is below 4 fC, and the linear measurement upper limit of each low-gain channel reaches about 100 pC, both of which meet the physical requirements of the detector.
Speaker: Qian Chen (University of Science and Technology of China (CN))
12:00  39 Long-Time Coherent Integration for High Precision Weak Signal LiDAR on Lunar Rover (20m)
Light Detection and Ranging (LiDAR) is a key enabling technology for autonomous navigation and obstacle avoidance in lunar rovers, where Time-of-Flight (ToF) LiDAR provides centimeter-level ranging accuracy by measuring the temporal delay between emitted laser pulses and their reflected echoes. However, the lunar environment poses severe challenges, including low surface reflectivity, intense solar background radiation due to the absence of an atmosphere, and strict constraints on power consumption and thermal dissipation, resulting in very low echo signal-to-noise ratios (SNR). To address these challenges, this work presents a weak-signal LiDAR system that employs an avalanche photodiode (APD) for echo detection and an FPGA-based backend implementing long-time coherent integration (LTCI) for signal processing. The proposed method achieves centimeter-level ranging accuracy even when the echo SNR is below unity. In addition, by dynamically adjusting the APD bias voltage, the system maintains a wide dynamic range for strong return signals, achieving an overall dynamic range of up to 60 dB.
A prototype system has been realized, integrating a mode-locked laser, an avalanche photodiode (APD) detector, a 1 GSa/s analog-to-digital converter (ADC), and an FPGA-based digital backend. All signal processing is executed on the FPGA, facilitating precise timing extraction and achieving centimeter-level ranging accuracy. The proposed architecture provides a robust and energy-efficient framework for high-precision LiDAR operation specifically designed for lunar rovers, addressing the extreme low-SNR conditions of the lunar surface environment.
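The principle behind long-time coherent integration can be demonstrated in a few lines: averaging N aligned traces shrinks the noise by a factor of sqrt(N) while the echo amplitude is preserved, so a pulse with SNR below unity becomes detectable. Everything below (trace length, delay bin, amplitudes) is invented for the demo and is not taken from the flight system:

```python
import random

# Toy long-time coherent integration (LTCI) demo: a sub-unity-SNR echo
# (amplitude 0.5, noise sigma 1.0) emerges after averaging many repetitions.
# All parameters are invented for illustration.
random.seed(42)

N_BINS, TRUE_DELAY_BIN = 200, 73
AMPLITUDE, NOISE_SIGMA = 0.5, 1.0
N_TRACES = 2000  # number of coherently averaged pulse repetitions

accum = [0.0] * N_BINS
for _ in range(N_TRACES):
    for i in range(N_BINS):
        echo = AMPLITUDE if i == TRUE_DELAY_BIN else 0.0
        accum[i] += echo + random.gauss(0.0, NOISE_SIGMA)

avg = [a / N_TRACES for a in accum]
detected = max(range(N_BINS), key=lambda i: avg[i])
print(detected)  # the buried echo is recovered at its true delay bin
```

After 2000 averages the per-bin noise drops to roughly 1/sqrt(2000) ≈ 0.022, so the 0.5-amplitude echo stands out by more than 20 standard deviations.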
Speaker: Wenhao Duan (Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China)
12:00  40 A Real-Time Feed-Forward System for Heralded Quantum Entanglement (20m)
Heralded quantum entanglement relies on stochastic single-photon detection events to establish correlations between remote qubits. The phase of the generated entangled state depends on the photon arrival time, making each heralding event associated with a different, time-dependent phase. Deterministic use of the entanglement thus requires real-time feed-forward operations that correct this detection-time-dependent phase before decoherence occurs. We present a hardware-based, event-driven feed-forward system that addresses this requirement. The system processes dual asynchronous single-photon detection channels and implements programmable temporal acceptance windows at the tens-of-picoseconds scale. Photon arrival times are measured via time-to-digital conversion (TDC), and events falling within the configured windows trigger conditional feed-forward control signals for phase correction on remote qubits.
Speaker: Zhizhen Qin (Department of Modern Physics, University of Science and Technology of China)
12:00  43 Tapped Delay Lines Time-to-digital converter design and performance in Versal architecture (20m)
Time-to-digital converters (TDCs) are a critical part of the acquisition chain to produce precise time measurements for radiation detection in medical imaging and high-energy physics. Research on TDC design in FPGA is often focused on digital means to improve resolution, precision and nonlinearity, such as using wave union or multiple delay lines. Less scrutiny is given to making sure that the tapped delay line (TDL) circuit itself is optimized through careful placement and routing. This study provides a model linking FPGA design parameters, namely, the delay through the TDL and clock skew to precision and nonlinearity. From this model, design principles are identified and used to produce four different circuit optimizations for the TDL. The effects of these optimizations on bin width and precision were studied both in simulation and measured experimentally on an UltraScale+ FPGA. These optimizations remove all ultrawide bins from the TDL. This led to an improvement of the RMS precision of a two-channel TDC from 4.7 ps to 2.5 ps. The produced model and simulator facilitate design choices and accelerate the optimization and validation of TDL-based TDCs.
Speaker: Julien Rossignol (University of California, Davis)
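A standard way to measure TDL bin widths such as those discussed above is the code-density test: with uniformly distributed hits, each tap's share of the counts is proportional to its bin width. The sketch below is our illustration with invented numbers (clock period, histogram, and helper name), not the paper's tooling:

```python
# Code-density (statistical) calibration sketch for a tapped-delay-line TDC:
# under uniformly distributed hits, bin_width_i = clk_period * N_i / N_total.
CLK_PERIOD_PS = 2000.0  # assumed: the delay line spans one clock period

def bin_widths_ps(hits_per_tap, clk_period_ps=CLK_PERIOD_PS):
    total = sum(hits_per_tap)
    return [clk_period_ps * h / total for h in hits_per_tap]

# A toy 10-tap histogram with one ultrawide bin (tap 2), the kind of defect
# the placement/routing optimizations above aim to eliminate:
widths = bin_widths_ps([100, 110, 400, 95, 105, 90, 100, 100, 100, 100])
mean_width = CLK_PERIOD_PS / 10
print(max(widths) > 2 * mean_width)  # True: tap 2 is "ultrawide"
```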
12:00  55 Progress in Readout Electronics for Scintillator-Based Multi-Neutron Detector Array (20m)
The study of the structure and reactions of neutron-rich unstable nuclei using radioactive ion beams (RIBs), and the exploration of exotic nuclear phenomena near the limits of stability, represent a frontier topic in contemporary nuclear physics. This highlights the significant importance of investigating multi-neutron systems and their correlations. Multi-neutron detection devices, which offer high resolution and high neutron detection efficiency, have been widely deployed in major nuclear science facilities around the world as essential technological means for conducting this research. A scalable readout electronics scheme for a plastic-scintillator-based multi-neutron detector array has been designed and developed, featuring high time resolution, high event-rate capability, and high integration. To realize high-granularity detection, silicon photomultipliers (SiPMs) are employed as photon sensors, together with dedicated readout electronics. In order to achieve a highly integrated readout for the massive number of SiPM channels on the plastic scintillator array, a system architecture based on Application-Specific Integrated Circuit (ASIC) chips is developed to read out and aggregate data from these channels. Functional verification and system integration tests have been completed. The readout scheme adopts a highly integrated design, integrated with the detector, while the front-end electronics are implemented with a separated architecture. Based on the readout system, a small-scale plastic scintillator array was constructed and tested, achieving a timing resolution of 110 ps and a spatial resolution of 1.2 cm.
Speaker: Chengjun Zhang (University of Science and Technology of China)
12:00  57 Extension of a wired time-synchronization protocol for sub-nanosecond accuracy to multiple FPGA families (20m)
In Japan, a community of users in experimental physics has been actively developing and deploying a generic streaming DAQ system. This system requires precise time synchronization among FPGAs in the front-end electronics and employs an original time-synchronization protocol implemented on FPGAs using general I/O pin pairs. Previously, the protocol was available only on AMD Kintex-7 FPGAs, which significantly limited the generality of the system. To address this limitation, we investigated differences between FPGA families and introduced two extensions.
First, the protocol was extended to operate on both the AMD 7-series and UltraScale FPGA families. The protocol utilizes the IDELAY and IOSERDES primitives; however, because different generations of these primitives are employed in the two families, their functionality and control schemes differ. To address this issue, we modified the surrounding circuitry of the primitives in the UltraScale family to achieve behavior equivalent to that of the AMD 7-series.
Second, differences in input signal delays introduced by the ISERDES and IDELAY primitives between two FPGAs were addressed, since accurately determining these differences is essential for precise time synchronization. Although these delay differences include device-specific components, they were previously neglected because communication was limited to identical devices. By explicitly determining the device-specific delays, the protocol was updated to enable time synchronization between different devices.
The updated time-synchronization system was implemented on various FPGA devices, and an accuracy within approximately 300 ps was achieved across all device combinations, confirming that the performance attained prior to the update was fully preserved.
Speaker: Mr Eitaro Hamada (KEK IPNS) -
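The role of the device-specific delays can be illustrated with PTP-style offset arithmetic: the offset between two clocks is recoverable from a round-trip timestamp exchange only after the fixed, per-direction input delays (e.g. those of the ISERDES/IDELAY primitives) are subtracted. A minimal sketch, illustrative arithmetic only; the function and parameter names are ours, not the protocol's:

```python
def clock_offset(t1, t2, t3, t4, d_ms, d_sm):
    """Offset of the slave clock relative to the master from one round trip.

    t1: master send time    (master clock)
    t2: slave receive time  (slave clock)
    t3: slave send time     (slave clock)
    t4: master receive time (master clock)
    d_ms, d_sm: calibrated fixed input delays in the master->slave and
                slave->master directions (e.g. ISERDES/IDELAY contributions).

    After subtracting the fixed delays, the remaining link delay is
    assumed symmetric, as in PTP-style synchronization.
    """
    return ((t2 - t1 - d_ms) - (t4 - t3 - d_sm)) / 2.0
```

With identical devices the fixed delays cancel; between different FPGA families they do not, which is why they must be determined explicitly.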
12:00
Performance evaluation of the ITER Synchronous Databus Network using Time-Sensitive Networking 20m
The Synchronous Databus Network (SDN) is the real-time communication protocol used in the Instrumentation and Control (I&C) systems of the ITER nuclear fusion reactor. The goal is to guarantee a maximum latency of 50 µs between two computers through a dedicated 10 Gbps network. This contribution evaluates SDN performance in a 1 Gbps network using two optimization strategies: Time Sensitive Networking (TSN) and real-time enhancements to the Linux kernel. The experimental setup involves two computers connected via a TSN-enabled switch and synchronized using a grandmaster clock through the Generalized Precision Time Protocol (gPTP, IEEE 802.1AS). TSN protocols such as IEEE 802.1Qbv and VLAN priority tagging (IEEE 802.1Q) are used to isolate SDN traffic in dedicated NIC queues, minimizing interference from low-priority traffic. Linux kernel version 6.12 is configured with PREEMPT RT patches to improve system determinism. A CPU core is isolated for SDN processes, and real-time scheduling (SCHED_FIFO) ensures priority execution. The SDN library is modified to support TSN queuing and enable timestamp tracing via socket options, without altering its core functionality. Performance was measured across multiple configurations. The most optimized setup achieved an average round-trip latency of $99.952 \mu s$ with a deviation of $2.278 \mu s$. Under saturated traffic, latency rose to $214.018 \mu s$ with a $27.259 \mu s$ deviation. These results confirm that the combination of TSN, utilizing a 1 Gbps network, and real-time Linux optimizations enables latency to be limited to values that approximate the performance achieved using SDN over 10 Gbps.
Speaker: David Andrino-Izquierdo (Universidad Politécnica de Madrid)
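The CPU-core isolation and SCHED_FIFO scheduling mentioned above can be requested from an ordinary Linux process. The sketch below is a minimal illustration, not the SDN library's actual code; it assumes Linux, and the SCHED_FIFO step requires CAP_SYS_NICE (or root), falling back to the default policy otherwise:

```python
import os

def make_realtime(core=3, priority=80):
    """Pin the calling process to one (ideally isolated) CPU core and
    request SCHED_FIFO real-time scheduling.

    Returns True when real-time priority was granted, False when the
    process lacks CAP_SYS_NICE and keeps the default scheduling policy
    (the CPU affinity is applied in both cases).
    """
    os.sched_setaffinity(0, {core})  # restrict to the chosen core
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except PermissionError:
        return False
```

The core number and priority here are arbitrary examples; in practice the core would be one excluded from the general scheduler (e.g. via the `isolcpus` kernel parameter).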
-
12:30
→
14:45
Lunch break 2h 15m
-
14:45
→
15:45
Poster session: Emerging Technologies, New Standards, Feedback on Experience & Industry
-
14:45
Modular Hardware, Firmware, and Control-System Integration: Lessons from Commissioning the Fast Beam Interlock System at ESS 20m
The Fast Beam Interlock System (FBIS) at the European Spallation Source (ESS) is a distributed, safety-critical system that collects status information from multiple accelerator subsystems and, when unsafe conditions arise, stops both the machine and the proton beam within microseconds. During the ESS commissioning process, from integration testing to full beam operation, FBIS was developed using a modular approach for hardware, firmware, and control-system interfaces. Standardized MicroTCA crates and CompactPCI platforms enabled the sharing and reuse of hardware components across different interlocking and diagnostic systems, simplifying development and maintenance. Each parameterized firmware release is validated with rigorous laboratory testing that mirrors the simulation methodologies, ensuring reliable system performance before deployment and efficient management of diagnostic data and events during operations. System timing ensures precise alignment of diagnostic data across FBIS crates. This paper summarizes the practical and architectural lessons learned from the implementation of FBIS, highlighting how the modular design of hardware and firmware enabled flexible adaptation, reliable operation, and modular control-system integration during machine commissioning.
Speaker: Dr Stefano Pavinato (European Spallation Source) -
15:05
Towards Virtualized 100+ Gbps Data Acquisition Software Systems 20m
Modern detectors in scientific infrastructures yield data at ever higher rates and in ever larger volumes. Despite extensive data reduction in the detector electronics, scientists employ software-based functions, e.g. high-level triggers, for further online data reduction; these run on a dedicated computer cluster located at the scientific infrastructure's site. As a consequence, scaling, operating, and maintaining the stability and performance of this cluster becomes a demanding responsibility that requires extensive time and manpower. This paper proposes Data Acquisition Functions Virtualization (DFV), a new paradigm that minimizes these operational efforts by eliminating computer clusters at scientific infrastructures and running software-based online data reduction functions on widely available general-purpose campus computing facilities. DFV leverages computer virtualization and high-performance Ethernet networking to isolate software-based functions on campus computing facilities while sustaining their required input throughput. We explore the key technical challenges in realizing DFV and propose the Data Acquisition Development Kit (DQDK), a novel framework for high-performance Ethernet-based readout functions and a cornerstone of DFV. We quantify the performance of the framework with and without computer virtualization, considering different virtualization setups. The new data acquisition paradigm is applied to the TRISTAN upgrade of the KATRIN experiment at the Karlsruhe Institute of Technology. The framework can reduce CPU resources by a factor of 2.67 and save up to 15% of the consumed energy.
Speaker: Dr Jalal Mostafa -
15:25
An FPGA-based Streaming DAQ emulator for evaluating and tuning DAQ networks and data management infrastructure 20m
As experiment data volumes in Nuclear Physics continue to rise – by some estimates up to three orders of magnitude this decade – the development of data streaming and data management capabilities has taken on new importance. Traditionally, Data Acquisition (DAQ) readout has been facilitated through software embedded in the instrument or direct register readout via bus systems such as the Versa Module Eurocard (VME) architecture. Recently, DAQs in Nuclear Physics have begun implementing streaming readout, in which digitizer instrumentation sends data over the network to data management computing clusters for processing and storage.
This paradigm shift means that DAQ network infrastructure and data management systems have become a critical link in the next generation of scientific instruments. Both the choice of network hardware and its topology influence data rates and tolerance for failures. Software tools such as iperf3 and TRex are often used to evaluate these networks. While they provide configurable traffic generation, these tools require extensive configuration to emulate real multi-channel correlated DAQ streams. They can also lack the timing precision native to the ASIC- or FPGA-based systems used in real-time DAQs.
To address these issues, we present “Solidago”, an FPGA-based streaming DAQ emulator capable of streaming 160 Gbps of synthetic digitizer events with the same timing characteristics as a clock-synchronized digitizer array. We characterize Solidago's streaming capabilities and compare it to real streaming digitizers and current software emulation tools. We further demonstrate the practical application of Solidago to stress-test and tune DAQ networks and data management systems.
Speaker: Jeff Maggio (SkuTek Instrumentation) -
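To make concrete what a synthetic digitizer-event stream looks like at the packet level, a minimal software generator might pack timestamped events into UDP datagrams, as sketched below. The 16-byte event layout and all names are invented for illustration and are not Solidago's actual format:

```python
import socket
import struct
import time

# Hypothetical fixed-size event record: timestamp_ns, channel, flags, adc_sum
EVT = struct.Struct("<QHHI")  # 8 + 2 + 2 + 4 = 16 bytes, little-endian

def make_event(channel, adc_sum, flags=0, t_ns=None):
    """Pack one synthetic digitizer event; timestamp defaults to now."""
    if t_ns is None:
        t_ns = time.monotonic_ns()
    return EVT.pack(t_ns, channel, flags, adc_sum)

def send_events(events, addr=("127.0.0.1", 5000)):
    """Send pre-packed events as individual UDP datagrams."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for e in events:
            s.sendto(e, addr)
    finally:
        s.close()
```

A software loop like this is exactly where the abstract's point applies: the inter-event spacing it produces is at the mercy of the OS scheduler, whereas an FPGA emulator can reproduce clock-synchronized timing.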
15:25
EJFAT - ESnet / Jefferson Lab FPGA Accelerated Transport. Design Improvements and User Experiences. 20m
The EJFAT (ESnet / Jefferson Lab FPGA Accelerated Transport) protocol is a scalable, terabit-scale streaming and load-balancing protocol for transporting DAQ (Data Acquisition) samples into a cluster of compute resources over the wide-area network, which entails unique choices in the protocol's design. In this poster we present that design, the choices that need to be made, and the rationale behind them. We also present the current state of our deployment and three scientific use cases across several Department of Energy (DOE) labs and supercomputing centres. We introduce improvements in the frame format, Reed-Solomon FEC protection, multi-user resource allocation, and E2SAR, the primary API and protocol for originating and terminating data flows into EJFAT. All EJFAT source code is available as open source. EJFAT is also operated as a live service by ESnet for use by DOE scientists and international collaborators.
Speaker: Yatish Kumar -
15:25
Keeping track of the Tracker: The ITk Production Database 20m
The ATLAS experiment will be upgraded within the next decade for the high-luminosity LHC. The high pile-up interaction environment (on average 200 interactions per bunch crossing at the 40 MHz crossing rate) requires a radiation-hard tracking detector with a fast readout. The Inner Tracker (ITk) upgrade is entering the production phase, an international effort to produce more than 27,000 detector modules at institutes around the globe. The ITk Production Database (PDB) tracks the location of all components and provides a repository for the quality control and assurance tests used to monitor production rates and yields. Component information will be retained for 10 years of running in order to support data-taking performance.
Examples will be provided of ITk community tools for PDB population and reporting. General themes of large-scale data management and multi-user global accessibility are now standard to LHC-scale detector production. These concepts are relevant to modern high-energy particle physics and to large experiments beyond HEP. The goal of this presentation is to promote information exchange and collaboration on tools which can support production.
Speaker: Kenneth Gibb Wraight (University of Glasgow (GB)) -
15:25
Upgrading the Timing System of ASDEX Upgrade 20m
At the fusion experiment ASDEX Upgrade, numerous subsystems contribute to plasma discharges (e.g. more than a hundred data acquisition systems). These systems need to operate synchronously for successful discharges, and a common time base is necessary for the evaluation of measurement data. The current timing system is based on custom-built hardware, as no off-the-shelf solution was available at the time. Maintaining this hardware is increasingly difficult, as many of its electronic components are no longer in stock and finding spare parts or suitable replacements is becoming hard. Today, network-based time synchronization can achieve sub-nanosecond accuracy using the Precision Time Protocol together with Synchronous Ethernet, or White Rabbit. With a scheduled renewal of the network infrastructure, it was therefore decided to migrate to network-based time synchronization. However, due to tight schedules between experimental campaigns, as well as hardware in operation that cannot be retrofitted to support network-based synchronization, it is not possible to replace the whole timing system at once. Instead, starting with the upcoming 2026 campaign, network-based time synchronization will be offered in parallel to the existing timing system. For successful operation, both time domains need to stay synchronous within the 20-nanosecond time resolution of the hardware timing system. This contribution describes the approach implemented to achieve the required synchronization and gives an outlook on future work for the integration of non-retrofittable systems. Together, this enables the step-wise replacement of the current timing system.
Speaker: Matthias Gehring (Max-Planck-Institute for Plasma Physics)
-
14:45
→
15:45
Poster session: Front-End Electronics, Fast Digitizers, Fast Transfer Links & Networks
-
15:05
A Reliable Data Processing Board for the front-end electronics of the Hyper-Kamiokande Experiment 20m
The front-end electronics of the Far Detector of the Hyper-Kamiokande experiment represent an upgrade in performance and complexity compared to its predecessor, Super-Kamiokande. The electronics that read out the PMTs (20,000 Hamamatsu R12860) must be mounted inside sealed steel vessels underwater. The inability to service the modules after installation, together with the need for more complex electronics, requires the development of a new board that aggregates the data coming from the digitizer boards and transmits them through optical links to the DAQ sitting outside the detector. Furthermore, this board must be highly reliable, with a requirement of at most 10% of units failing after 20 years of operation. Through a careful selection of components together with redundant elements, the target reliability is reached. This work describes the hardware design of the Data Processing Board (DPB), a System-on-Chip design that leverages a SOM (System On Module)-baseboard layout combining the versatility of FPGAs with the power of CPUs, allowing both data taking and system monitoring for each of the one thousand vessels present in the Hyper-Kamiokande Far Detector. The design has already been tested and iterated upon by manufacturing pre-series units and is entering mass production of one thousand units.
Speaker: Alejandro Gómez Gambín (Universitat Politècnica de València) -
15:25
A 10-ps Resolution Direct-Digitizing Time Measurement ASIC for Fast Detectors 20m
In nuclear and particle physics experiments, high-precision time measurement is a core technology for applications such as Time-of-Flight (TOF). Fast detectors, represented by Multi-gap Resistive Plate Chambers (MRPCs), are widely used in large-scale and high-resolution detector systems due to their excellent timing performance. However, their weak and fast output signals, combined with the integration limitations of traditional architectures that separate the analog front-end from digital quantization, pose severe challenges to readout electronics. To address this, an 8-channel high-precision time measurement ASIC, featuring monolithic integration of the analog front-end and digital back-end, was designed and implemented in a 180 nm CMOS process. The chip integrates pre-amplifiers, discriminators, Time-to-Digital Converters (TDCs), and a Phase-Locked Loop (PLL). The pre-amplifier employs a Regulated Cascode (RGC) topology to achieve both high bandwidth and low noise, while providing a stable and low input impedance for matching with various readout applications. The discriminator utilizes a differential saturated amplification structure, featuring low noise and low jitter. The TDC is based on a two-step vernier architecture, effectively balancing timing resolution, power consumption, and dead time. Furthermore, an on-chip low-jitter PLL based on a Multiplying Delay-Locked Loop (MDLL) provides a high-stability reference clock for the TDC. Test results from the fabricated chip show a stable input impedance as low as 15 Ω and a timing precision of 10 ps with a 100 fC input charge, validating the capability of the proposed monolithic solution for high-precision readout.
Speaker: Zhuang Li (University of Science and Technology of China (CN)) -
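The vernier principle behind such a TDC can be illustrated with a single-stage ideal model: two oscillators whose periods differ by the LSB, with the fast one gaining on the slow one each cycle until their edges coincide. This is a sketch under idealized assumptions (no jitter, one vernier stage), not the chip's actual two-step architecture:

```python
import math

def vernier_measure(dt_ps, t_slow_ps, t_fast_ps):
    """Ideal single-stage vernier TDC model.

    The start edge launches the slow oscillator (period t_slow_ps) and the
    stop edge, dt_ps later, launches the fast one (t_fast_ps < t_slow_ps).
    The fast oscillator gains lsb = t_slow_ps - t_fast_ps per cycle, so the
    edges coincide after N = ceil(dt / lsb) cycles and the measured
    interval is N * lsb, i.e. dt quantized with step lsb.
    """
    lsb = t_slow_ps - t_fast_ps
    n = math.ceil(dt_ps / lsb)
    return n * lsb
```

The appeal of the scheme is that the resolution is set by the *difference* of two periods, which can be far finer than either period itself; a two-step variant trades a coarse first stage against the long dead time a fine single stage would incur.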
15:25
A 25 Gbps VCSEL Driving ASIC for Detector Front-end Readout 20m
This paper presents the design and test results of a 25 Gbps VCSEL driver ASIC fabricated in a 55 nm CMOS technology for detector front-end readout. The ASIC is composed of an input equalizer stage, a pre-driver stage, and a novel output driver stage. The input equalizer adopts a 5-step CTLE structure to compensate the high-frequency loss of the PCB traces, bonding wires, and input pads; it can provide up to 5.8 dB of boost at 18 GHz while providing a DC gain of 10.7 dB. To meet both the gain/bandwidth requirements and the area restriction, the pre-driver stage adopts an inductor-shared peaking technique and an active-feedback structure. The total gain and overall bandwidth of the pre-driver stage are better than 18 dB and 19.5 GHz, respectively, at all process corners. The proposed output driver stage uses double feedforward-capacitor compensation, a T-coil technique, and an adjustable FFE pre-emphasis technique to improve the bandwidth. The ASIC has been integrated in a customized optical module with a VCSEL array, and both the electrical function and the optical performance have been fully evaluated. The output optical eye diagram passes the eye-mask test at a data rate of 25 Gbps; the peak-to-peak jitter of the 25 Gbps optical eye is 21.7 ps and the RMS jitter is 3.3 ps.
Speakers: Cong Zhao, Prof. Di Guo (Central China Normal University) -
15:25
A High-precision Frequency-adaptive Clock Synchronization System 20m
In accelerator facilities, hundreds of devices are distributed over large areas. To ensure synchronous operation of all components, a high-precision clock distribution and synchronization system is required. However, due to the use of specific clock frequencies and the increasingly stringent synchronization-accuracy requirements of modern accelerator systems, conventional approaches based on the standard White Rabbit technique are no longer sufficient. This work presents a high-precision, frequency-adaptive clock distribution and synchronization system based on high-speed transceivers and an improved Direct Digital Synthesis (DDS) technique implemented on a high-performance FPGA. By exploiting the precise delay-adjustment capability of the FPGA high-speed transceivers and employing transceiver delay-fixing techniques, high-precision reference-clock distribution and synchronization are achieved. In addition, a high-resolution FPGA Time-to-Digital Converter (TDC) is used to extract the frequency and phase of the input signal in a fully digital manner, eliminating the need for complex analog circuitry at the master node. In conjunction with waveform recovery at the slave nodes, self-adaptive, high-precision distribution and synchronization of arbitrary-frequency clocks is realized. Test results show that the proposed system achieves a synchronization accuracy better than 15 ps for specific-frequency clock distribution. Over a temperature variation of 50 °C, the phase variation between nodes remains within 20 ps, while reliable data and command transmission is also achieved.
Speaker: Jiajun Qin (University of Science and Technology of China (CN)) -
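The basic DDS arithmetic shows why the technique is a natural fit for arbitrary-frequency synthesis: an N-bit phase accumulator clocked at f_clk can hit any target frequency to within f_clk/2^N. The sketch below is the textbook relation with an assumed 32-bit accumulator and an example 125 MHz clock, not the authors' improved DDS:

```python
def dds_tuning_word(f_out_hz, f_clk_hz, acc_bits=32):
    """Frequency tuning word for an ideal DDS phase accumulator."""
    return round(f_out_hz / f_clk_hz * (1 << acc_bits))

def dds_actual_freq(ftw, f_clk_hz, acc_bits=32):
    """Frequency actually produced by that tuning word."""
    return ftw * f_clk_hz / (1 << acc_bits)
```

At 125 MHz with 32 bits the frequency resolution is about 0.03 Hz, so essentially arbitrary clock frequencies can be approximated; the phase accuracy quoted in the abstract then depends on the delay-fixed transceivers, not on this arithmetic.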
15:25
A low jitter 2.56 Gbps reference-less CDR ASIC in 55 nm for NICA Multi-Purpose Detector Project 20m
The Nuclotron-based Ion Collider fAcility (NICA) is a new accelerator complex designed at the Joint Institute for Nuclear Research to study the properties of dense baryonic matter. The Multi-Purpose Detector (MPD) is one of the three detectors at NICA. A bi-directional serial optical data transceiver system is employed between the front-end and the back-end of the detector readout electronics, and a low-jitter clock-data recovery (CDR) ASIC is one of the key components in the high-speed serial downlink direction. This paper presents the design and test results of a low-jitter 2.56 Gbps reference-less CDR ASIC for the NICA MPD project. The CDR ASIC consists of an input equalizer stage, a bang-bang phase detector (BBPD), a charge pump (CP), a low-pass filter (LPF), an LC voltage-controlled oscillator (LC-VCO), and an SPI module.
The CDR ASIC has been fabricated in a 55 nm CMOS process. Phase-noise measurements show that the CDR ASIC outputs a 2.56 GHz clock with a phase noise of -110 dBc/Hz at 1 MHz offset and an RMS jitter of 857 fs. Logic tests show that the recovered 2.56 Gbps data is correct, and a BER of less than $10^{-12}$ is achieved in all tests.
Speakers: Cong Zhao, Prof. Di Guo (Central China Normal University) -
15:25
A prototype detector for photon timing measurement with MCP-PMT 20m
The precise detection of photon arrival times is a fundamental capability that enables progress in diverse scientific and technological fields, including medical imaging, high-energy physics, and astrophysics. The pursuit of higher resolution, improved efficiency, and reduced noise in these applications drives the ongoing development of advanced photodetectors. Among them, the micro-channel plate photomultiplier tube (MCP-PMT) is a leading technology, recognized for its excellent single-photon sensitivity and ultrafast temporal response, with a rise time of less than 100 ps. It also provides a low transit time spread, typically around 10 ps for multi-photoelectron events and below 50 ps for single photoelectrons. To achieve picosecond-level timing resolution, we have developed a prototype detector named the iPMT (intelligent PhotoMultiplier with aTDC readout), designed for precise measurement of single-photon arrival times using an MCP-PMT. The prototype combines a single-anode MCP-PMT with a boost converter, a voltage divider, and precision timing electronics. A custom ASIC, fabricated in a 55 nm CMOS process and designated FPMROC, is integrated into the design. This chip contains a low-noise preamplifier, a discriminator, and a time-to-digital converter (TDC) capable of measuring both the time of arrival and the time over threshold. In initial tests, the prototype demonstrated a coincidence time resolution of 62 ps, of which one detector contributes 33 ps and the other contributes 53 ps. This indicates its potential for applications requiring high-precision time-resolved measurements. More detailed testing and analysis will be conducted in the coming months.
Speakers: Xin Wang (Institute of High Energy Physics, Chinese Academy of Sciences (CN)), Xiongbo Yan (Institute of High Energy Physics) -
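The per-detector contributions quoted above follow from the usual assumption that independent timing jitters add in quadrature, i.e. CTR² = σ₁² + σ₂². A quick check of the abstract's numbers (our arithmetic, not the authors'):

```python
import math

# Coincidence time resolution splits into per-detector terms in quadrature,
# assuming independent (approximately Gaussian) timing jitter:
#   ctr^2 = sigma_1^2 + sigma_2^2
ctr_ps = 62.0     # measured coincidence time resolution
sigma_1_ps = 33.0  # contribution of the better detector
sigma_2_ps = math.sqrt(ctr_ps**2 - sigma_1_ps**2)  # ~52.5 ps
```

The result, about 52.5 ps, is consistent with the 53 ps quoted for the second detector to within rounding.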
15:25
A Real-Time Feed-Forward System for Heralded Quantum Entanglement 20m
Heralded quantum entanglement relies on stochastic single-photon detection events to establish correlations between remote qubits. The phase of the generated entangled state depends on the photon arrival time, making each heralding event associated with a different, time-dependent phase. Deterministic use of the entanglement thus requires real-time feed-forward operations that correct this detection-time-dependent phase before decoherence occurs. We present a hardware-based, event-driven feed-forward system that addresses this requirement. The system processes dual asynchronous single-photon detection channels and implements programmable temporal acceptance windows at the tens-of-picoseconds scale. Photon arrival times are measured via time-to-digital conversion (TDC), and events falling within the configured windows trigger conditional feed-forward control signals for phase correction on remote qubits.
Speaker: Zhizhen Qin (Department of Modern Physics, University of Science and Technology of China) -
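The acceptance-window logic on the two asynchronous channels amounts to pairing detections whose arrival times agree within a programmable window. The helper below is a hypothetical software rendering of that matching step; the real system performs it in hardware on the TDC output:

```python
def heralds(ch_a, ch_b, window_ps):
    """Pair detections from two channels that coincide within window_ps.

    ch_a, ch_b: arrival times (e.g. in ps) on the two detection channels.
    Returns the list of (t_a, t_b) pairs accepted as heralding events,
    using a two-pointer sweep over the time-sorted streams.
    """
    a, b = sorted(ch_a), sorted(ch_b)
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        dt = b[j] - a[i]
        if abs(dt) <= window_ps:
            out.append((a[i], b[j]))  # coincidence: accept and consume both
            i += 1
            j += 1
        elif dt > 0:
            i += 1  # a[i] too early to ever match b[j]
        else:
            j += 1  # b[j] too early to ever match a[i]
    return out
```

Each accepted pair would then trigger the conditional feed-forward pulse, with the phase correction computed from the measured arrival time.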
15:25
A Timing-Oriented Pixel Detector ASIC With Delay-Chain-Based Outputs for Three-Dimensional Particle Track Reconstruction 20m
With the growing demand for real-time particle detection and precise event reconstruction, pixel detector readout chips with accurate timing capability for time-resolved particle tracking have become increasingly important. This work presents an energy- and timing-oriented pixel detector ASIC for three-dimensional particle track reconstruction.
The chip is fabricated in a GSMC 130 nm CMOS process and implements a $10 \times 10$ pixel array, occupying a total chip area of $1~\mathrm{mm} \times 1.5~\mathrm{mm}$, with each pixel measuring $90~\mu\mathrm{m} \times 50~\mu\mathrm{m}$. Two on-chip delay chains form the basis of the time measurement scheme, in which three selectable delay stages provide nominal delays of 812 ps, 7.18 ns, and 14.36 ns, respectively, achieving an intrinsic timing resolution of 278.4 ps. Each pixel integrates an analog front-end circuit composed of a low-noise charge-sensitive amplifier, a comparator, and a tunable delay module. The analog front-end achieves an equivalent noise charge of 13.9 $e^{-}$ and a charge-to-voltage conversion gain of 30.5 $\mu\mathrm{V}/e^{-}$. A digital readout circuit implements row-column rolling-shutter scanning across the pixel array.
Energy information is digitized by an external ADC, while timing information from on-chip tapped delay-chain outputs is processed in an FPGA to extract time-of-arrival values. The FPGA-processed data are transferred to a host computer for three-dimensional particle track reconstruction. Simulation results based on the proposed architecture demonstrate the feasibility of time-resolved three-dimensional particle tracking.
Speaker: Chunlai Dong -
15:25
An 11-Gbps edge-pre-emphasis SST transmitter in 55 nm for detector front-end data transmission 20m
In particle physics experiments, the front-end detector needs to transfer a large volume of data while minimizing power consumption, due to the high density of readout channels and the limited cooling space. Compared to traditional current-mode logic (CML) drivers, source-series-terminated (SST) drivers exhibit approximately 50% better power efficiency. This paper presents an SST transmitter that utilizes an edge-pre-emphasis method to maintain a high data transmission rate while minimizing power consumption. The proposed design consists of a phase-locked loop (PLL) for clock generation, a serializer for parallel-data serialization, an SST driver for serial data output, and a serial peripheral interface (SPI) for configuration. The SST driver integrates a main driving stage that provides an output current of around 5 mA, a programmable pull-up and pull-down transistor array to adapt the output resistance, a programmable edge-pre-emphasis circuit to adjust the emphasis strength, and a programmable delay chain to modify the emphasis duration. The SST driver achieves an 11-Gbps data rate. The simulated current consumption of the SST driver is typically 14 mA, ranging from 5 to 21 mA depending on the output resistance and the emphasis strength and duration. It reduces power consumption by at least 44% compared to the CML driver in the previous version. The design, along with a separate test chip dedicated exclusively to the SST driver, has been fabricated in 55-nm CMOS technology. Laboratory tests are expected to begin in late February, and detailed design and measurement results will be presented.
Speakers: Qingkang Wu (Nanjing University, IHEP), Xiaoting Li (IHEP) -
15:25
Benchmarking SRO Performance of CAEN Digitizers 20m
The CAEN x2740/x2745, x2730, and x2751 digitizers belong to the Digitizer 2.0 family and are designed to meet the high data-rate requirements of modern nuclear and particle physics experiments, medical imaging systems, and large-scale detector readouts. These platforms support both triggered acquisition and continuous streaming readout, integrating high-speed Flash ADCs, FPGA-based real-time processing, large DDR4 buffers, and native USB 3.1, 1 GbE TCP, and 10 GbE UDP connectivity. This work provides a performance evaluation of streaming readout architectures. Hardware benchmarks were performed employing triggered and streaming readout firmware in order to investigate saturation behavior and identify throughput limitations. In triggered readout with raw waveform transmission over 10 GbE UDP, data rates close to 1.1 GB/s were achieved with no packet loss. In list-mode streaming readout with onboard processing, the maximum event rate is limited by the internal FPGA event-sorting algorithm rather than by network bandwidth. Software benchmarks using the CAEN FELib and the CoMPASS DAQ software highlight additional decoding and processing bottlenecks.
Speaker: Mr Carlo Tintori (CAEN S.p.A.) -
15:25
Design and prototyping of the X-ray Fluorescence Spectrometer for online component analysis 20m
Traditional X-ray Fluorescence (XRF) techniques struggle to meet the requirements of low-latency response and continuous online measurement in industrial scenarios. To address this challenge, this work develops a compact, cloud-deployable XRF spectrometer with real-time data acquisition capability for industrial online elemental analysis.
The system integrates a Si-PIN detector with a prototype of the readout electronics (PRE), and employs a Kintex-7 FPGA as the core unit for high-speed digital signal processing. A Moving Window Deconvolution (MWD) algorithm is implemented for spectral resolution enhancement, and a Geant4 simulation is used for air-attenuation correction to ensure reliable performance under non-vacuum conditions. By integrating the USR-DR502 module and the MQTT protocol, the established three-tier host-workstation system achieves low-latency data synchronization and supports automated measurement workflows, enabling simultaneous local and remote monitoring.
Test results show that the system achieves an energy resolution of 266 eV at 6.40 keV and an integral nonlinearity of 0.15%. Moreover, quantitative analysis of standard Fe-Ca samples yields relative errors within 0.20%, which validates the system's high accuracy and feasibility for practical industrial online applications.
Speaker: Jiayun Chen (Shandong University) -
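Moving Window Deconvolution is a classic digital pulse-processing algorithm: first deconvolve the preamplifier's exponential decay into an impulse-like signal, then apply a moving sum of width M to obtain a flat-topped pulse whose height measures the deposited charge. Below is a minimal software sketch of this deconvolve-then-sum form, assuming a single exponential decay with time constant tau (in samples); it is illustrative only, not the authors' FPGA implementation:

```python
import math

def mwd(x, tau, m):
    """Moving Window Deconvolution (deconvolve-then-sum form).

    x:   sampled waveform
    tau: exponential decay constant of the pulse, in samples
    m:   moving-sum window length (sets the flat-top width)
    """
    d = math.exp(-1.0 / tau)
    # Step 1: deconvolve the exponential decay -> impulse-like signal
    a = [x[0]] + [x[n] - d * x[n - 1] for n in range(1, len(x))]
    # Step 2: moving sum of width m -> flat-topped output
    y, run = [], 0.0
    for n, v in enumerate(a):
        run += v
        if n >= m:
            run -= a[n - m]
        y.append(run)  # run = sum of a[n-m+1 .. n]
    return y
```

For an ideal exponential pulse of amplitude A the output plateau sits at A for m samples, which is what makes the filter attractive for amplitude extraction at high count rates.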
15:25
Design of a Dual-Detector Gamma Spectrometer for In Situ Marine Radioactivity Monitoring 20m
China is steadily advancing the development of nuclear-energy facilities along its coastal regions. This trend increases the demand for real-time, high-precision $\gamma$-ray monitoring of seawater for routine environmental surveillance and emergency preparedness. To address this, a power-efficient, buoy-mounted marine $\gamma$-ray spectrometer was developed that integrates a high-resolution CdZnTe (CZT) detector with a high-efficiency CsI(Tl) scintillation detector. The readout electronics provide full-waveform digitization and an FPGA-based data acquisition (DAQ) architecture. The Kintex-7 FPGA firmware integrates data acquisition and digital signal processing, data storage and transmission, and system control and monitoring, thereby enabling long-term unattended operation. System linearity and noise were evaluated with injected charges of 10–1300 fC, resulting in an integral nonlinearity < 0.062% and a baseline RMS noise < 2.75 ADC counts under battery-powered operation. Using a $^{137}$Cs source, the CZT detector achieved an energy resolution of 2.28% at 662 keV. During an 88-h nearshore deployment off Weihai, China, the system operated continuously. The measured $\gamma$-ray dose rate in seawater ranged from 2.81 to 3.23 nGy$\cdot$h$^{-1}$, and the system successfully resolved characteristic peaks from naturally occurring radionuclides, including $^{40}$K and nuclides from the uranium and thorium decay series. These results demonstrate that the proposed dual-detector spectrometer, together with its power-efficient electronics and DAQ architecture, satisfies the requirements for in situ radiation monitoring in complex marine environments. This work provides a basis for deploying a networked system of marine radiation-monitoring stations.
Speaker: Mr Tao Liu (Shandong University) -
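Energy resolution figures such as the 2.28% at 662 keV quoted above are conventionally the Gaussian FWHM of the photopeak divided by its centroid. A minimal Python sketch on synthetic data (illustrative values only, not the authors' analysis):

```python
import numpy as np

# Synthetic photopeak: a 662 keV line with ~2.3% FWHM resolution,
# roughly as quoted for the CZT detector (illustrative values only).
rng = np.random.default_rng(0)
sigma = 662.0 * 0.023 / 2.355          # FWHM = 2.355 * sigma for a Gaussian
energies = rng.normal(662.0, sigma, 100_000)

counts, edges = np.histogram(energies, bins=200, range=(600, 720))
centers = 0.5 * (edges[:-1] + edges[1:])

# Moment-based estimate of centroid and FWHM from the histogram
centroid = np.average(centers, weights=counts)
var = np.average((centers - centroid) ** 2, weights=counts)
fwhm = 2.355 * np.sqrt(var)

resolution = 100.0 * fwhm / centroid   # percent FWHM at 662 keV
print(f"Resolution: {resolution:.2f}% at {centroid:.0f} keV")
```

A real spectrum would use a Gaussian fit over the peak region rather than raw moments, to exclude the Compton continuum.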
15:25
Design of a Low-Power Electronics System for Low Energy X-ray Polarization Detection Based on Multi-Detector Units 20m
The primary scientific objective of the POLAR-2 experiment on the China Space Station is to perform wide-field survey observations of X-ray transients, such as Gamma-ray Bursts (GRBs). As one of its three core payloads, the Wide-field Low-energy X-ray Polarization Detector (LPD) utilizes a Gas Pixel Detector (GPD) based on pixel chips (Topmetal-L) for charge collection. To meet its on-orbit application requirements, this paper presents the design and development of an electronics prototype capable of simultaneously operating three independent detection units. The prototype employs a three-board architecture: a Front-End Board responsible for analog signal acquisition and analog-to-digital conversion; a Main Control Board serving as the system control hub for data processing and communication with the satellite platform; and a High-Voltage Board that supplies precise and stable operational bias to the detector. In the firmware design, we have optimized the readout logic of the Topmetal-L chip, effectively reducing the transmission of invalid pixel data and enhancing system efficiency.
Test and verification results demonstrate that the total power consumption of the electronics prototype system is 16.1 W, complying with platform constraints; thermal simulation analysis indicates a local maximum on-orbit operating temperature of approximately 44°C, significantly lower than that of current similar gas detectors successfully operating in orbit; and functional experiments verify that the system can stably drive the three detection units, providing an effective detection area of 11.8 cm² per prototype unit and the capability to capture and logically read out photoelectron events at fluxes up to 450 counts/cm²/s.
Speaker: Mr Shi Qiang Zhou (Central China Normal University) -
15:25
Development of a prototype front-end readout ASIC for gas pixel detectors 20m
Gas pixel detectors (GPDs) are widely used in astronomical X-ray polarimetry for photoelectron event reconstruction. Future GPD applications impose increasing demands on front-end readout ASICs in terms of throughput and pixel-level performance. This work presents a prototype front-end readout ASIC developed for next-generation GPD systems. The chip integrates a $32 \times 30$ pixel array with a pixel pitch of $45\,\mathrm{\mu m} \times 45\,\mathrm{\mu m}$. Each pixel incorporates a charge-sensitive analog front-end, while a localized and noise-robust triggering scheme based on a $2 \times 2$ pixel block architecture enables block-level time-of-arrival (ToA) measurement and self-triggered readout. A configurable readout architecture is adopted, supporting both trigger-based region-of-interest (ROI) readout and triggerless full-frame (FF) readout. Measurement results show that the pixel front-end achieves a charge-to-voltage conversion gain of $69\,\mathrm{\mu V/e^-}$, an equivalent noise charge (ENC) of approximately $50\,\mathrm{e^-}$, and an input dynamic range of $26\,\mathrm{k\,e^-}$ with about 3% nonlinearity. The block-level ToA measurement provides a time resolution of $25\,\mathrm{ns}$, a timing precision of $16.5\,\mathrm{ns}$, and a dynamic range of $25.6\,\mathrm{\mu s}$.
Speaker: Zhuo Zhou (Central China Normal University) -
15:25
Development of the SAMIDARE Board: A New Acquisition System for TPCs 20m
In the field of accelerator-based nuclear physics, significant progress has been made in increasing beam intensities. Consequently, acquisition systems must keep pace with the rising event rates and channel counts of increasingly complex experimental setups. Time projection chambers usually feature thousands of channels and, at exotic-nuclei facilities, are often used in an active-target configuration to allow inverse-kinematics reactions on thicker targets. Beyond the high channel count, such a configuration adds to the DAQ the complexity of distinguishing the beam-induced background from the recoil signal.
Following the obsolescence of the de facto standard General Electronics for TPCs (GET), the SPADI Alliance started the development of a new acquisition board for TPC signals: the SAMIDARE Board. Although designed with future streaming-readout applications in mind, its first application aims to reduce data throughput via on-device FPGA filtering. In this contribution, an overview of the device and the prototype firmware will be presented.
Speaker: Claudio Santonastaso (RIKEN Nishina Center) -
15:25
Development of UDP based data-streamer and slow-control protocol for streaming readout FEE in SPADI alliance 20m
The Signal Processing and Data Acquisition Infrastructure (SPADI) alliance in Japan promotes the development of streaming readout DAQ systems utilizing Ethernet as the communication protocol for front-end electronics (FEE). While the SiTCP, which is widely used in Japan, provides reliable TCP-based data transfer and UDP-based slow control, its reliance on TCP imposes CPU overhead on host PCs. In addition, SiTCP lacks support for simultaneous multi-host slow control access and dynamic IP configuration.
To address these limitations, we are developing a new Ethernet communication protocol for FEE enabling high-throughput, low-overhead UDP-based data streaming and flexible slow control. The adoption of UDP eliminates the need for the TX buffer used for retransmissions in TCP, improving the memory usage of FEE. Additionally, it frees FEE from the backpressure imposed by PCs during congestion.
The proposed protocol is based on the MAC, IPv4, and UDP layers, which have been developed within the IDROGEN project in France. On top of these layers, we newly developed a data streamer and a slow control protocol. The data streamer efficiently encapsulates any format of DAQ data in UDP packets and supports multiple independent UDP streams as well as bidirectional communication. The slow control block provides register access through a simplified internal bus with an AXI4-Lite interface. This block has a packet buffer allowing access from multiple hosts.
The protocol is being implemented for 1 GbE and 10 GbE and will be released as open-source software. This presentation reports on the protocol design, throughput performance, and packet loss characteristics.
Speaker: Ryotaro Honda (KEK IPNS) -
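For illustration, the encapsulation and receiver-side loss accounting that a UDP data streamer implies can be sketched as follows; the header layout here is hypothetical and does not reproduce the SPADI/IDROGEN format:

```python
import struct

# Hypothetical 8-byte stream header: magic, stream ID, 32-bit sequence
# number. The actual SPADI/IDROGEN header layout is not reproduced here.
HEADER = struct.Struct("!HHI")   # magic, stream_id, seq (network byte order)
MAGIC = 0xDA7A

def encapsulate(stream_id: int, seq: int, payload: bytes) -> bytes:
    """Prefix a DAQ payload with the stream header for one UDP datagram."""
    return HEADER.pack(MAGIC, stream_id, seq) + payload

def decapsulate(datagram: bytes):
    magic, stream_id, seq = HEADER.unpack_from(datagram)
    assert magic == MAGIC, "not a stream packet"
    return stream_id, seq, datagram[HEADER.size:]

# Receiver-side loss detection: UDP performs no retransmission, so gaps
# in the per-stream sequence numbers are how dropped packets are counted.
def count_gaps(seqs):
    return sum(b - a - 1 for a, b in zip(seqs, seqs[1:]) if b > a + 1)

pkts = [encapsulate(3, s, b"event") for s in (0, 1, 2, 4, 5)]  # seq 3 lost
seqs = [decapsulate(p)[1] for p in pkts]
print(count_gaps(seqs))  # → 1
```

In the FEE this logic lives in firmware, with per-stream sequence counters maintained in the FPGA; the host side only needs the gap accounting.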
15:25
Extension of a wired time-synchronization protocol for sub-nanosecond accuracy to multiple FPGA families 20m
In Japan, a community of users in experimental physics has been actively developing and deploying a generic streaming DAQ system. This system requires precise time synchronization among FPGAs in the front-end electronics and employs an original time-synchronization protocol implemented on FPGAs using general I/O pin pairs. Previously, the protocol was available only on AMD Kintex-7 FPGAs, which significantly limited the generality of the system. To address this limitation, we investigated differences between FPGA families and introduced two extensions.
First, the protocol was extended to operate on both the AMD 7-series and UltraScale FPGA families. The protocol utilizes the IDELAY and IOSERDES primitives; however, because different generations of these primitives are employed in the two families, their functionality and control schemes differ. To address this issue, we modified the surrounding circuitry of the primitives in the UltraScale family to achieve behavior equivalent to that of the AMD 7-series.
Second, differences in input signal delays introduced by the ISERDES and IDELAY primitives between two FPGAs were addressed, since accurately determining these differences is essential for precise time synchronization. Although these delay differences include device-specific components, they were previously neglected because communication was limited to identical devices. By explicitly determining the device-specific delays, the protocol was updated to enable time synchronization between different devices.
The updated time-synchronization system was implemented on various FPGA devices, and an accuracy within approximately 300 ps was achieved across all device combinations, confirming that the performance attained prior to the update was fully preserved.
Speaker: Mr Eitaro Hamada (KEK IPNS) -
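As a rough illustration of the calibration problem the abstract describes, wired time synchronization typically relies on a two-way timestamp exchange (as in IEEE 1588/PTP); a minimal sketch, assuming a symmetric link:

```python
# Two-way exchange: the master timestamps t1 (send) and t4 (receive);
# the slave timestamps t2 (receive) and t3 (send). Assuming a symmetric
# link, the slave's clock offset and the one-way delay separate as below.
# Device-specific I/O delays (e.g. through ISERDES/IDELAY) appear as an
# asymmetry that must be calibrated out, as the abstract describes.
def offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example in nanoseconds: true offset 7 ns, one-way delay 50 ns.
off, d = offset_and_delay(t1=1000.0, t2=1057.0, t3=1100.0, t4=1143.0)
print(off, d)  # → 7.0 50.0
```

An uncompensated delay asymmetry of Δ biases the offset estimate by Δ/2, which is why determining the device-specific ISERDES/IDELAY delays explicitly is essential for the sub-nanosecond accuracy reported.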
15:25
FPGA-Based Front-End Electronics for AC-LGAD Detector 20m
The Low Gain Avalanche Detector (LGAD) is a high-precision silicon-based timing detector known for its excellent signal-to-noise ratio and a typical gain of 10–50. The AC-coupled Low Gain Avalanche Detector (AC-LGAD), evolved from the LGAD, represents the latest generation of high-precision 4D detectors capable of simultaneously measuring both the time and position of particles.
We proposed a 16-channel front-end electronics (FEE) readout system for AC-LGADs. The analog part consists of a radio-frequency preamplifier and a high-speed discriminator in each channel. All discriminated signals are fed to an FPGA, in which 16 channels of Time-to-Digital Converters (TDCs) based on the tapped delay line (TDL) structure are implemented. Test results showed that a timing resolution of ~6 ps could be achieved for the TDC.
The FEE system currently provides time-of-arrival (TOA) measurement; a high-precision time-over-threshold (TOT) module is under development. The intrinsic timing resolution of the FEE was measured with a pulse generator and is about 10 ps per channel. The timing resolution for AC-LGAD detectors read out with our customized FEE reaches ~40 ps, and can be further improved after TOT calibration in the future.
Speaker: Ms Liyan Jin (Shandong University) -
15:25
FPGA-Based Multi-Channel Real-Time Readout System for the STCF Muon Detector Prototype 20m
The Super Tau-Charm Facility (STCF) is a proposed next-generation high-luminosity e+e- collider in China, operating at 2–7 GeV with a peak luminosity of about 0.5 × 10³⁵ cm⁻² s⁻¹. Its muon detector (MUD), located at the outermost layer of the spectrometer, must provide high muon detection efficiency while suppressing charged-hadron backgrounds. Since the hit position is encoded by the channel number, the readout electronics mainly need to record the arrival time of signals above a particle-identification threshold. For the muon detector prototype, we have therefore developed a multi-channel real-time readout system for precise timing of over-threshold signals. The hardware consists of a front-end amplification and discrimination (FEAD) board and an FPGA-based readout board. The FEAD converts detector signals into digital pulses, which are sent via USL connectors to the Field-Programmable Gate Array (FPGA). The FPGA measures the high-precision arrival time and uses a multi-phase pulse-width measurement to correct leading-edge time walk, while data are transferred to a PC through a Gigabit Ethernet link. Preliminary test results with a signal generator demonstrate synchronous readout of 32 channels with a time resolution better than 35 ps per channel and show an excellent linear response in pulse-width measurement. An upgraded FPGA board is under development, and further results will be presented at the conference.
Speakers: Ms Qian He (Shandong University), Mr Hao Dong (Qufu Normal University), Mr Bo Wang (Shandong University), Ms Xiaohan Sun (Shandong University), Feng LI (USTC), Kun Hu (Shandong University (CN)), Ms Yuying Li (Shandong University) -
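The leading-edge time-walk correction mentioned above can be illustrated with a toy simulation: larger pulses cross a fixed threshold earlier, and the measured pulse width serves as an amplitude proxy from which the walk is fitted and subtracted. The pulse model and all numbers below are hypothetical, not the STCF design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated leading-edge timing of linear-edge pulses: larger pulses
# cross a fixed threshold earlier (time walk) and stay above it longer
# (larger width). All values are illustrative, not from the prototype.
amp = rng.uniform(0.2, 1.0, 5000)           # normalized amplitudes
thr, rise = 0.1, 2.0                        # threshold, rise time (ns)
walk = thr / amp * rise                     # threshold-crossing delay (ns)
width = 10.0 * amp + rng.normal(0, 0.02, amp.size)  # ToT as amplitude proxy
toa = walk + rng.normal(0, 0.01, amp.size)  # measured arrival, true t0 = 0

# Calibration: fit walk as a polynomial in measured width, then subtract.
coef = np.polyfit(width, toa, deg=4)
corrected = toa - np.polyval(coef, width)

print(f"jitter before: {toa.std()*1e3:.0f} ps, after: {corrected.std()*1e3:.0f} ps")
```

The residual after correction is set by the electronic jitter and the width-measurement noise, which motivates the multi-phase (fine-grained) pulse-width measurement described in the abstract.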
15:25
GAROP3: A Gated Readout ASIC for the Proton Beam Monitor of the COMET Experiment 20m
The COMET (Coherent Muon to Electron Transition) experiment at J-PARC searches for neutrino-less $\mu-e$ conversion, requiring a SiC sensor in the beam pipe as a proton monitor to serve as a veto between proton pulses.
The GAROP (GAted-ReadOut Proton) ASIC has been developed as the dedicated front-end readout for the SiC sensor; it can be gated off with a repetition period of $1.2\ \mu s$ to withstand the extreme radiation of the J-PARC proton beam.
Fabricated in a 65 nm CMOS process, the chip features eight channels grouped into two flavours that differ in shaper topology for performance comparison.
Each channel comprises a CSA (Charge Sensitive Amplifier), a CR-RC shaper, and a comparator.
To prevent saturation during the main proton pulse and to ensure fast recovery, the circuit employs a gating topology with three switches: one for CSA bypassing ($SW_{CSA}$), one for shaper input isolation ($SW_{CRRC}$), and one for shaper baseline holding ($SW_{BSLN}$).
The recently submitted version, GAROP3, features decoupled gating inputs for the three switches, allowing independent adjustment of delay times and switching sequences to achieve the optimal gating setup with minimum waveform distortion.
Additionally, each channel is equipped with an auto-tuning threshold circuit to compensate for process-induced baseline variations by adjusting the threshold according to the results of the comparator.
The GAROP3 has been submitted for fabrication. Validation results demonstrating the chip's functionality to handle the input signal with or without gating, as well as threshold tuning, will be presented.
Speaker: Mr Xiang-Yu Xu (KEK) -
15:25
Long-Time Coherent Integration for High Precision Weak Signal LiDAR on Lunar Rover 20m
Light Detection and Ranging (LiDAR) is a key enabling technology for autonomous navigation and obstacle avoidance in lunar rovers, where Time-of-Flight (ToF) LiDAR provides centimeter-level ranging accuracy by measuring the temporal delay between emitted laser pulses and their reflected echoes. However, the lunar environment poses severe challenges, including low surface reflectivity, intense solar background radiation due to the absence of an atmosphere, and strict constraints on power consumption and thermal dissipation, resulting in very low echo signal-to-noise ratios (SNR). To address these challenges, this work presents a weak-signal LiDAR system that employs an avalanche photodiode (APD) for echo detection and an FPGA-based backend implementing long-time coherent integration (LTCI) for signal processing. The proposed method achieves centimeter-level ranging accuracy even when the echo SNR is below unity. In addition, by dynamically adjusting the APD bias voltage, the system maintains a wide dynamic range for strong return signals, achieving an overall dynamic range of up to 60 dB.
A prototype system has been realized, integrating a mode-locked laser, an avalanche photodiode (APD) detector, a 1 GSa/s analog-to-digital converter (ADC), and an FPGA-based digital backend. All signal processing is executed on the FPGA, facilitating precise timing extraction and achieving centimeter-level ranging accuracy. The proposed architecture provides a robust and energy-efficient framework for high-precision LiDAR operation specifically designed for lunar rovers, addressing the extreme low-SNR conditions of the lunar surface environment.
Speaker: Wenhao Duan (the Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China) -
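The core idea of long-time coherent integration can be sketched in a few lines: averaging N synchronized echo traces leaves the signal unchanged while the noise amplitude falls as 1/√N, so a sub-unity single-shot SNR still yields a clean range estimate. A toy simulation (all parameters illustrative, not the flight design):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1e9                         # 1 GSa/s ADC, as in the prototype
n, shots = 2048, 1000            # samples per trace, pulses integrated
t = np.arange(n) / fs

# Weak Gaussian echo at 1.0 us (150 m one-way range) buried in noise:
# the single-shot SNR here is 0.3, i.e. below unity. Values illustrative.
tof = 1.0e-6
echo = 0.3 * np.exp(-0.5 * ((t - tof) / 3e-9) ** 2)
traces = echo + rng.normal(0, 1.0, (shots, n))

# Coherent integration: averaging N time-aligned traces leaves the echo
# untouched while the noise drops as 1/sqrt(N), so SNR grows by sqrt(N).
avg = traces.mean(axis=0)
est_tof = t[np.argmax(avg)]
print(f"estimated range: {est_tof * 3e8 / 2:.2f} m")
```

The sqrt(N) gain assumes the pulse-to-pulse timing is stable over the integration window, which is what the mode-locked laser and FPGA-synchronous acquisition provide in the prototype.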
15:25
Millimeter-Wave-Based Wireless Transmission System for CEPC Detector 20m
To meet the Circular Electron-Positron Collider (CEPC) Detector’s demand for low-material-budget and high-reliability data transmission, this study proposes a 60 GHz millimeter-wave (mm-wave) transmission system. The core transceiver chips were verified radiation-tolerant: they maintained stable communication under 7 Mrad X-ray irradiation (5.5 h) and 1.2×10¹² n_eq/cm² neutron irradiation (21 h), satisfying CEPC’s harsh radiation requirements. Optimized with low-noise amplifiers and patch antennas (replacing horn antennas), the system achieved a maximum transmission rate of 6.25 Gbps at 67.5 cm, with negligible crosstalk with the Taichu3 Vertex Pixel prototype chip. A multi-channel system integrated via flexible PCBs realized error-free transmission (BER < 10⁻¹⁴) at up to 18.75 Gbps over 30 cm, offering a scalable, low-cable solution for CEPC and other high-energy physics experiments.
Speaker: Dr Jun Hu (Institute of High Energy Physics, Chinese Academy of Sciences) -
15:25
Precision analysis and development of timing measurement ASICs for strip LGAD readout 20m
For the proposed Circular Electron Positron Collider (CEPC) research and development program, the Outer Tracker (OTK) plans to utilize AC-coupled Low-Gain Avalanche Detector (AC-LGAD) microstrip sensors to achieve high-precision spatial measurements (10 µm) and timing measurements (50 ps). The initial prototype of the strip LGAD readout ASIC, named LATRIC0, is a single-channel design that includes a front-end amplifier and has been fabricated using 55-nm CMOS technology. Measurement results indicate that the prototype achieves bin sizes of approximately 30 ps, a time precision of less than 1 Least Significant Bit (LSB), and a power consumption of less than 6.3 mW per channel at a 1-MHz event rate, thereby preliminarily meeting the detector requirements. The second iteration, LATRIC8, features eight channels with a pitch of 100 μm and incorporates optimizations to the TDC core to address issues identified during LATRIC0 testing. Testing of LATRIC8 is expected to begin in early March. This report will focus on precision measurements, analysis, and optimization of LATRIC0, as well as preliminary tests of LATRIC8; the analysis, design optimizations, and measurement results will be presented.
Speakers: Chuanye Wang (Nanjing University), Xiaoting Li (IHEP) -
15:25
Progress in Readout Electronics for Scintillator-Based Multi-Neutron Detector Array 20m
The study of the structure and reactions of neutron-rich unstable nuclei using radioactive ion beams (RIBs), and the exploration of exotic nuclear phenomena near the limits of stability, are frontier topics in contemporary nuclear physics, which underlines the importance of investigating multi-neutron systems and their correlations. Multi-neutron detection devices offering high resolution and high neutron detection efficiency have been widely deployed at major nuclear science facilities around the world as essential tools for this research. A scalable readout electronics scheme for a plastic-scintillator-based multi-neutron detector array has been designed and developed, featuring high time resolution, high event-rate capability, and high integration. To realize high-granularity detection, silicon photomultipliers (SiPMs) are employed as photon sensors, together with dedicated readout electronics. To achieve a highly integrated readout for the massive number of SiPM channels on the plastic scintillator array, a system architecture based on Application-Specific Integrated Circuit (ASIC) chips is developed to read out and aggregate data from these channels. Functional verification and system integration tests have been completed. The readout scheme adopts a highly integrated design coupled to the detector, while the front-end electronics are implemented with a separated architecture. Based on the readout system, a small-scale plastic scintillator array was constructed and tested, achieving a timing resolution of 110 ps and a spatial resolution of 1.2 cm.
Speaker: Chengjun Zhang (University of Science and Technology of China) -
15:25
Readout Electronics System for the CsI(Tl) Calorimeter of the VLAST-P Payload 20m
The VLAST-P (Very Large Area gamma-ray Space Telescope-Pathfinder) is a miniaturized space detector designed to detect high-energy gamma-rays from the Sun (in the MeV to GeV energy range), as well as high-energy protons from solar flares. The CsI(Tl) calorimeter is a crucial part of the VLAST-P payload, measuring the energy of incoming high-energy photons and providing the hit information from cosmic rays within the calorimeter for the trigger selection of effective events. An electronics system consisting of 4 FEE (Front-End Electronics) modules and 1 PAM (Pre-Amplifier Module), with a total power consumption of about 22 W, has been developed. Signals from the avalanche photodiodes (APDs) are amplified and shaped by a CSA (Charge Sensitive Amplifier), then split into high- and low-gain chains to achieve a large dynamic range. The FEEs digitize analog signals from a total of 100 channels, transmit the scientific data to the payload electronics control unit, and provide hit signals for trigger selection. To assure long-term reliability in the harsh space environment, radiation hardness, thermal design, and component- and board-level quality control were carefully considered. In addition, the key indexes of energy linearity, noise level, and dynamic range were preliminarily studied. Test results showed that the system-level ENC (Equivalent Noise Charge) for each high-gain channel is below 4 fC, and the linear measurement upper limit of each low-gain channel reaches about 100 pC, both of which meet the physical requirements of the detector.
Speaker: Qian Chen (University of Science and Technology of China (CN)) -
15:25
Tapped Delay Lines Time-to-digital converter design and performance in Versal architecture 20m
Time-to-digital converters (TDCs) are a critical part of the acquisition chain, producing precise time measurements for radiation detection in medical imaging and high-energy physics. Research on TDC design in FPGAs often focuses on digital means to improve resolution, precision, and nonlinearity, such as wave union or multiple delay lines; less scrutiny is given to ensuring that the tapped delay line (TDL) circuit itself is optimized through careful placement and routing. This study provides a model linking FPGA design parameters, namely the delay through the TDL and the clock skew, to precision and nonlinearity. From this model, design principles are identified and used to produce four different circuit optimizations for the TDL. The effects of these optimizations on bin width and precision were studied both in simulation and experimentally on an UltraScale+ FPGA. These optimizations remove all ultrawide bins from the TDL, improving the RMS precision of a two-channel TDC from 4.7 ps to 2.5 ps. The produced model and simulator facilitate design choices and accelerate the optimization and validation of TDL-based TDCs.
Speaker: Julien Rossignol (University of California, Davis)
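A standard way to characterize a TDL, consistent with the bin-width discussion above, is a code-density test: hits uniform in time populate each bin in proportion to its width, and the single-shot RMS quantization error follows from the measured widths. A simulated sketch (the tap delays below are invented, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Non-uniform tap delays of a simulated 64-tap delay line (ps): a TDL's
# bins are never equal, and an "ultrawide" bin dominates the error budget.
widths = rng.uniform(5, 25, 64)
widths[40] = 60.0                     # one ultrawide bin
edges = np.concatenate(([0.0], np.cumsum(widths)))
period = edges[-1]

# Code-density test: hits uniform in time populate each bin in proportion
# to its width, which is how bin widths are measured in a real FPGA TDC.
hits = rng.uniform(0, period, 1_000_000)
codes = np.searchsorted(edges, hits, side="right") - 1
est_widths = np.bincount(codes, minlength=64) / hits.size * period

# Single-shot RMS quantization error of a bin of width w is w/sqrt(12);
# averaging over bins weights each by its occupancy probability w/period.
rms = np.sqrt(np.sum((est_widths / period) * est_widths**2 / 12.0))
print(f"RMS quantization error: {rms:.1f} ps")
```

Because the occupancy-weighted average involves w³, a single ultrawide bin inflates the RMS error disproportionately, which is why removing ultrawide bins, as the study does, pays off so strongly in precision.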
-
14:45
-
15:45
→
16:45
Data Acquisition and Trigger Architectures Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
Convener: Martin Grossmann PSI (Paul Scherrer Institut)-
15:45
Picosecond-level Clock Distribution System of the LHCb Experiment at CERN 20m
The LHCb experiment at CERN is undergoing R&D studies to prepare for operation in Run 5 at roughly five times the Run-3 (2022–2026) instantaneous luminosity. Several subdetectors will introduce precision timestamping, imposing stringent requirements on the distribution of the LHC synchronous clock to Front-End (FE) electronics. Specifically, the end-to-end clock phase uncertainty must be reduced by an order of magnitude relative to the current Run-3 system, from approximately 250 ps peak-to-peak to ≤ 50 ps in Run 4 (2030–2033) and ≤ 10 ps in Run 5 (2036–2041).
This work presents a detailed description of the novel test suites developed to characterize the timing distribution chain, enhancing automation, reproducibility, and sample size. The limitations of the current system have been analysed thoroughly, and the results enabled the design of a novel, more robust clock-tree architecture that mitigates these effects. The proposed design achieves well below 30 ps peak-to-peak clock phase uncertainty, validated through systematic measurements across all operational conditions, including reconfiguration, reprogramming, and recompilation. The results demonstrate a viable path to meeting LHCb's picosecond-level timing requirements for Run 4, while enabling high-precision timestamping in the upgraded detectors in view of a target of ≤ 10 ps precision for Run 5.
Speaker: Alberto Perro (CERN) -
16:05
A Machine Learning-Based Real-time Anomaly Detection System 20m
In modern high-energy physics experiments, effective data quality monitoring is essential. Histograms serve as a primary tool, ensuring experimental efficiency and data integrity. However, the sheer volume of histograms produced in large-scale experiments makes manual monitoring impractical. Although traditional statistical methods handle large datasets, they often struggle with high false-negative rates and lack adaptability to complex patterns. To address this, this work presents a machine learning framework for real-time anomaly detection, aimed at enhancing the automation, accuracy and scalability of monitoring extensive histogram data. Our method utilizes an autoencoder to learn normal patterns from histogram data, identifying anomalies as instances with high reconstruction errors. We established a complete, end-to-end pipeline spanning feature extraction, model training, evaluation, and online deployment, delivering a highly automated and scalable solution. The framework was developed and systematically validated using a dataset of approximately 200,000 SPMT histogram samples from the Jiangmen Underground Neutrino Observatory (JUNO). Experimental results demonstrate that our approach can effectively monitor tens of thousands of histograms simultaneously, reducing processing latency to mere seconds while significantly improving detection precision and recall compared to conventional methods. The successful deployment of this system in a live experimental environment proves its engineering utility and potential for broader adoption, presenting a new paradigm for intelligent experiment operations. This talk will detail the system's design, implementation, and application.
Speaker: Shuihan Zhang (Institute of High Energy Physics, CAS) -
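The reconstruction-error principle behind the framework can be illustrated with a linear stand-in: here an SVD/PCA projection plays the role of the trained autoencoder, and histograms whose reconstruction error exceeds a threshold learned from nominal data are flagged. Toy data only, not JUNO SPMT histograms:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "good" histograms: a Gaussian peak shape with small statistical
# fluctuations (a stand-in for the SPMT histograms in the abstract).
x = np.linspace(-5, 5, 100)
def make_hist(shift=0.0):
    h = np.exp(-0.5 * (x - shift) ** 2) + rng.normal(0, 0.02, x.size)
    return h / h.sum()

train = np.stack([make_hist() for _ in range(500)])
mean = train.mean(axis=0)

# "Encoder": project onto the top-k principal components; "decoder":
# project back. The reconstruction error is the anomaly score.
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:5]
def score(h):
    r = (h - mean) @ basis.T @ basis + mean
    return np.sqrt(np.mean((h - r) ** 2))

threshold = max(score(h) for h in train) * 1.5
good, shifted = make_hist(), make_hist(shift=1.5)
print(score(good) > threshold, score(shifted) > threshold)  # → False True
```

A nonlinear autoencoder generalizes this by learning a curved manifold of normal histogram shapes instead of a linear subspace; the flag-on-high-reconstruction-error logic is the same.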
16:25
The Final Run of the sPHENIX Experiment 20m
By the time of this conference, the sPHENIX experiment at the Relativistic Heavy Ion Collider (RHIC), and the RHIC program itself, will have concluded operations. RHIC is scheduled to shut down to begin construction of the Electron-Ion Collider, marking the end of 25 years of heavy-ion physics at the facility.
sPHENIX was designed as a high-rate detector optimized for precision measurements of jets, heavy flavor, and quarkonia in relativistic heavy-ion collisions. Following a successful commissioning phase, the experiment completed its full physics runs, collecting high-quality data in both proton-proton and gold-gold collision systems.
This contribution summarizes the final operational status of the detector, highlights key performance metrics achieved during data taking, and outlines the scope of the recorded dataset. We also briefly discuss ongoing analysis efforts and the anticipated physics legacy of the sPHENIX program.
Speaker: Martin Lothar Purschke (Brookhaven National Laboratory (US))
-
15:45
-
16:45
→
17:15
Coffee break 30m
-
17:15
→
19:00
Data Acquisition and Trigger Architectures Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
Convener: Marc-André Tétrault-
17:15
GPU-Based Level-3 Real-Time Trigger for the Belle II High Level Trigger System 20m
The Belle II experiment at KEK is a next-generation B-factory designed to operate at unprecedented luminosity. During recent SuperKEKB operations, unexpectedly high backgrounds have led to a significant increase in the Level-1 trigger rate. If this situation persists as the luminosity continues to improve, the processing demands may exceed the original design capacity of the Belle II High Level Trigger (HLT) system.
The current HLT is implemented as a large-scale PC farm equipped with ~7,200 Intel Xeon CPU cores. While this system provides sufficient processing power for physics events at the design luminosity, further expansion is not cost effective. In contrast, modern GPUs offer a substantially larger number of processing cores at a much lower cost per performance, making them attractive for real-time event rejection.
To address this challenge, we propose the introduction of a GPU-based processing stage in front of the existing HLT, implementing a software-based Level-3 trigger. This Level-3 trigger performs fast particle tracking and energy clustering using a limited detector set. Background events are efficiently identified by reconstructing the event origin and rejecting events inconsistent with the nominal beam collision region.
A uniform software development environment compatible with the Belle II framework is realized by integrating CUDA compilation into the existing SCons-based build system. GPU binaries are directly linked with the Belle II framework (basf2), enabling integration with existing C++ code. The GPU system is implemented in the HLT input servers using NVIDIA RTX 5000 Ada GPUs. System design, data-flow architecture, and performance in beam tests are presented.
Speaker: Prof. Ryosuke Itoh (KEK) -
17:35
Real-time DAQ System for Muon-Spin Spectroscopy 20m
Conventional muon-spin rotation, relaxation, and resonance ($\mu$SR) experiments rely on trigger-based data acquisition systems, in which the arrival of an individual muon initiates a fixed readout window. At continuous muon sources, this approach fundamentally limits the achievable muon rate. To overcome these constraints, we present the design and implementation of a fully triggerless data acquisition (DAQ) system tailored for high-rate $\mu$SR applications.
The proposed DAQ operates in a continuous readout mode, recording all detector hits from silicon pixel sensors and fast scintillators without the use of a hardware trigger. Event building is performed retrospectively by correlating the timing and spatial information of muon and positron tracks, enabling the reconstruction of decay pairs entirely in software or firmware.
The system is based on FPGA front-end boards originally developed for the high-rate particle physics experiment Mu3e, supporting zero-suppressed, time-unsorted data streams at 1.25 Gbit/s per detector chip. Multiple front-end boards are synchronized and aggregated using an Arria 10 FPGA board with high-speed optical links and PCIe readout to a host PC. This architecture supports sustained data rates corresponding to muon intensities of up to $10^8$ $\mu$/s.
We discuss the DAQ concept, synchronization strategy, data flow, and scalability, and present results from test beam measurements conducted at the Paul Scherrer Institute using silicon pixel and fast scintillating detectors. The system establishes a flexible and extensible DAQ framework for next-generation $\mu$SR experiments at continuous beam facilities.
Speaker: Marius Snella Köppel (ETH Zurich (CH)) -
17:55
Data reduction in CMS 20m
Bandwidth and storage limitations are a major bottleneck for many physics measurements and searches in HEP. The CMS experiment has addressed this constraint with several techniques that increase the number of events saved to disk while preserving physics performance; these approaches are still evolving and will be improved for Phase-2.
This talk will give an overview of the status of the compression techniques used in CMS, with a focus on data scouting (reduced-content, trigger-level formats) and RAW Prime (strip clusters saved instead of raw strip data), and their impact on physics analyses. We will summarize how these methods enable higher effective trigger rates, lower thresholds, and larger datasets, particularly for low-mass and soft-object signatures, and outline ongoing developments for Run 3 and the HL-LHC.
Speaker: Silvio Donato (Universita & INFN Pisa (IT)) -
18:15
JUNO DAQ: Operational Status and Recent Advances 20m
The Jiangmen Underground Neutrino Observatory (JUNO) is a multi-purpose underground neutrino experiment located in southern China that has recently become operational. JUNO is equipped with 17,612 large 20-inch photomultipliers (LPMTs) in its Central Detector, designed to detect photons using a high-speed, high-resolution waveform digitization technique. To enhance detection capabilities, 25,600 small 3-inch PMTs (SPMTs) are strategically placed in the gaps of the LPMT array. Additionally, 2,400 LPMTs are employed in the surrounding Water Cherenkov detector to identify cosmic ray muons and reduce associated backgrounds. The JUNO Data Acquisition (DAQ) system is designed to handle a data influx of approximately 40 GB/s from all sub-detectors. It processes these streams in real-time, performing online data assembly and event classification to compress throughput to under 500 Mbps, facilitating efficient network transmission and storage.
This presentation will outline the architectural design and technical implementation of the JUNO DAQ system, with a focus on recent advancements during the transition to the operational phase. Noteworthy upgrades include a comprehensive enhancement of the readout system to improve stability and mitigate the effects of short-term high-rate events. Additionally, improvements in dataflow availability address single points of failure, ensuring data integrity and continuous operation. These enhancements have significantly increased system stability and reliability. Insights into these developments and our operational experiences will also be shared.
Speaker: Xiaolu Ji -
18:35
The phase-1 upgrade of the ATLAS level-1 calorimeter trigger 20m
The ATLAS level-1 calorimeter trigger is a custom-built hardware system that identifies events containing calorimeter-based physics objects, including electrons, photons, taus, jets, and total and missing transverse energy. In Run 3, L1Calo has been upgraded to process higher granularity input data. The new trigger comprises several FPGA-based feature extractor modules, which process the new digital information from the calorimeters and execute more sophisticated trigger algorithms. The design of the system will be presented along with an analysis of the improved performance for identifying interesting proton-proton collisions in the increasingly challenging Run-3 LHC pile-up environment, as well as in heavy ion collisions where timing and noise effects are particularly challenging.
Speaker: Niklas Schmitt (Johannes Gutenberg Universitaet Mainz (DE))
-
17:15
-
19:00
→
20:00
Welcome reception 1h
-
08:30
→
09:00
-
-
08:30
→
09:10
Invited talk Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
-
08:30
AI/ML at LHC 40m
-
08:30
-
09:10
→
10:10
Front-End Electronics, Fast Digitizers, Fast Transfer Links & Networks Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
-
09:10
The front end electronics of the Hyper Kamiokande far detector 20m
The Hyper-Kamiokande (Hyper-K) experiment is a next-generation underground water Cherenkov detector designed to search for leptonic CP violation and to pursue a broad science program, including neutrino astrophysics and searches for nucleon decay.
The experiment will use around 20,000 20-inch PMTs to observe the inside of the HK tank. The collaboration chose an INFN-proposed system based on discrete components as the electronics for the experiment.
Each board has 12 channels, and two boards are housed in each vessel, so that a single vessel can be connected to 24 PMTs in total.
A single channel feeds two main blocks: the charge-measurement circuit and the timing-measurement circuit. The board is equipped with a Xilinx Kintex-7 FPGA, which generates the integrator hold and the ADC conversion signals and measures time with a TDC. The FPGA is also responsible for collecting data from all the channels and for communicating with the Digital Processing Board (DPB) that connects the vessel to the DAQ of the experiment. Speaker: Jacopo Pinzino (Universita & INFN Pisa (IT)) -
09:30
Early Statistical Estimation for Data Rate Optimization in Dark-Count-Limited Detectors Using Fully-Analog Processing 20m
The use of large detectors and the pursuit of better topological and energy resolution imply ever-increasing data rates. Consequently, data-reduction strategies cannot be overlooked, yet it is equally important to avoid dead time in the acquisition process. Most of the light sensors used in time projection chambers (TPCs), such as the NEXT experiment, are silicon photomultipliers (SiPMs), which introduce dark-count events. In time-window integrator systems, a common challenge arises when the dark-count rate is similar to the actual signal rate: under these conditions a simple amplitude threshold could discard a large part of the true information. Alternative solutions require single-photoelectron (PE) recognition, so a pile-up reduction mechanism is needed.
A statistical estimator is presented to overcome these issues. A fully analog implementation has been designed, which avoids acquisition dead time. A fast shaper module reduces pile-up effects. A photon-counting module then acts as a PE threshold, computing the time over threshold (TOT) of the photon signal, which is proportional to the number of photons. Simultaneously, a mean PE arrival time is computed from the last N PEs. With both modules, the PE count inside a time window is obtained, while some discarded windows that could contain true signal are recovered using the mean-arrival-time module. This is feasible because that module works across time-window boundaries. The whole system allows a real-time decision to be taken without storing further information, thus reducing the output data rate.
Speaker: Jara García Barrena (Universitat Politècnica de València (UPV)) -
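The decision logic described above — count PEs inside a window, and recover boundary-straddling bursts via the mean arrival time of the last N PEs — can be illustrated in software. This is a minimal sketch only: the function name and all thresholds are invented for illustration, and the real system performs this fully in analog with no dead time.

```python
def window_decision(pe_times, window_start, window_end,
                    count_threshold=5, n_last=4, spread_max=2e-7):
    """Decide whether a time window likely contains true signal.

    pe_times: photoelectron arrival times (s), which may extend past
    the window boundaries. count_threshold, n_last and spread_max are
    illustrative tuning parameters, not values from the experiment.
    """
    in_window = [t for t in pe_times if window_start <= t < window_end]
    if len(in_window) >= count_threshold:
        return True                      # plain photon-count acceptance
    # Recovery path: the spread of the last N PE arrival times, computed
    # across window boundaries, flags bursts that straddle the edge.
    last_n = sorted(pe_times)[-n_last:]
    if len(last_n) == n_last:
        spread = max(last_n) - min(last_n)
        if spread < spread_max:          # tightly clustered -> likely signal
            return True
    return False
```

The recovery branch is what lets a burst split across two adjacent windows still be accepted rather than discarded by the plain count.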
09:50
Multi-phase processing Framework for Real-Time DAQ at Multi-GSPS Sampling Rates in Sci-Compiler 20m
The continuous increase in sampling rates of modern waveform digitizers, reaching and exceeding the multi-GSPS regime, poses significant challenges to real-time FPGA-based signal processing. The fast digitizers of new platforms enable unprecedented bandwidth and timing resolution, but they require processing architectures capable of sustaining deterministic operation at effective data rates of several gigasamples per second.
To address these challenges, Sci-Compiler has been extended with native support for time-multiplexed (TM) processing architectures, allowing ultra-fast serial data streams to be transformed into parallel, phase-interleaved buses operating at lower clock frequencies. In this approach, a single high-rate ADC stream is decomposed into multiple synchronous lanes, enabling complex real-time algorithms to be executed within the timing and resource constraints of modern FPGAs. This paradigm requires a complete redesign of classical nuclear signal-processing blocks, including baseline restorers, trapezoidal and pole-zero digital shapers, constant fraction discriminators, peak detectors, and trigger logic, to preserve numerical accuracy and temporal determinism across interleaved samples.
Sci-Compiler abstracts the complexity of TM architectures through its graphical design environment and modular IP library, allowing users to deploy advanced real-time processing on digitizers operating from 500 MSPS up to 5 GSPS without direct FPGA coding. The framework automatically manages lane alignment, pipeline balancing, and inter-phase data dependencies, enabling scalable and portable designs across heterogeneous hardware platforms. This contribution presents the architectural principles of Sci-Compiler’s TM engine, describes the implementation of key signal-processing IPs, and demonstrates their application to high-rate nuclear spectroscopy and fast-timing measurements, highlighting how time-multiplexed FPGA processing enables next-generation real-time DAQ systems. Speaker: Andrea Abba (Nuclear Instruments)
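The time-multiplexed decomposition that the framework automates can be pictured as de-interleaving the serial ADC stream into phase lanes and recombining results in order. A behavioral sketch in Python — the FPGA implementation operates on parallel buses per clock cycle, and the function names here are illustrative:

```python
def deinterleave(samples, n_lanes):
    """Split a serial high-rate stream into n_lanes phase-interleaved
    lanes: lane k holds samples k, k+n_lanes, k+2*n_lanes, ...
    Each lane can then be processed at 1/n_lanes of the input rate."""
    return [samples[k::n_lanes] for k in range(n_lanes)]

def interleave(lanes):
    """Recombine per-lane outputs back into original sample order.
    Filters with memory (shapers, CFDs) must additionally exchange
    state between neighboring lanes, which this sketch omits."""
    out = []
    for i in range(len(lanes[0])):
        for lane in lanes:
            if i < len(lane):
                out.append(lane[i])
    return out
```

The inter-phase data dependencies mentioned in the abstract arise exactly because a sample's predecessor lives in the neighboring lane, not earlier in the same lane.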
-
09:10
-
10:10
→
10:40
Coffee break 30m
-
10:40
→
11:25
Mini Orals Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
-
10:40
21 MicroTCA-based low latency data streaming and processing architecture using UDP offload and Time Sensitive Networking 20m
As experimental fusion devices transition from shot-based operation to longer pulse duration, diagnostic and control systems need to evolve to support this kind of operation. This work proposes a distributed data acquisition architecture that aims to provide ultra-low latency communications based on full hardware User Datagram Protocol (UDP) offloading, leveraging the emerging IEEE Time Sensitive Networking (TSN) technologies for deterministic latency and synchronization, and providing an edge computing platform that allows the implementation of Digital Signal Processing (DSP) algorithms and Artificial Intelligence (AI) real-time applications.
The solution proposes the use of the Micro Telecommunications Computing Architecture (MTCA) standard as the foundation and is entirely built from Commercial Off-The-Shelf (COTS) components. The core device is an Advanced Mezzanine Card (AMC) form-factor board based on an AMD Zynq UltraScale+ MPSoC, which enables heterogeneous processing through its integrated ARM processing system and programmable logic. For connectivity, an FPGA Mezzanine Card (FMC) module with dual Small Form-factor Pluggable (SFP+) cages uses the device's high-speed transceivers to establish a 10 Gigabit Ethernet (10GbE) fiber optic link, on top of which a UDP offload engine is used to ensure efficient data streaming to external systems.
Speaker: Alejandro Piñas-Higueruela (Universidad Politecnica de Madrid) -
11:00
35 Development of the IDROGEN White Rabbit system for SuperKEKB and future projects at KEK 20m
Clock synchronization among distant FPGA circuits based on the White Rabbit (WR) network is a new technology that will become standard in future accelerator projects. We plan to employ the IDROGEN carrier board at SuperKEKB and in future projects at KEK. This board was developed at CNRS/IN2P3, IJCLab, and several feasibility studies were carried out jointly by IJCLab and KEK. The combined IDROGEN-WR system can provide synchronized RF signals at individual slave nodes, with an rms noise level of sub-picosecond order. We developed a precise synchronization scheme between the IDROGEN-WR setup output and the RF clock employed in the Low-Level RF system operating SuperKEKB, and confirmed the long-term stability of their relative phase. The results indicate that SuperKEKB can increase the number of RF signal branches by implementing the IDROGEN carrier board for several synchronization purposes. We also plan to develop a distributed DAQ system as another application of the combined IDROGEN-WR system, in which the sampling clocks of the ADC circuits on individual slave nodes are synchronized. We consider this approach a unified, standalone DAQ system covering the sensors installed along the large-scale accelerator beamline, allowing the causal relations among simultaneous events in the accelerator operation to be determined. The latest results and prospects of our R&D activities are reported.
Speaker: Hiroshi Kaji -
11:05
103 Development of a Coincidence Measurement System for Scattered and Decay Particles Using a Streaming DAQ and Digitizers 20m
In experiments using the ultra-high-resolution magnetic spectrometer Grand Raiden at the Research Center for Nuclear Physics (RCNP), the University of Osaka, several projects are currently underway to integrate digitizers for waveform acquisition from silicon detectors. Scattered particles are detected by the focal-plane detectors of Grand Raiden to determine the excitation energy, while decay particles from excited nuclei are measured by silicon detectors, whose signal waveforms are acquired using digitizers.
Therefore, it is necessary to develop a method for coincidence measurements of scattered particles and decay particles using both the focal-plane detectors and silicon detectors in experiments with Grand Raiden at RCNP.
Establishing this measurement technique will expand the experimental scope of Grand Raiden at RCNP and provide an essential experimental foundation. The data acquisition (DAQ) system has recently undergone a significant transition: while the system for the focal-plane detectors has been upgraded to a triggerless streaming DAQ to ensure dead-time-free data collection, waveform acquisition for the silicon detectors still relies on a conventional trigger-based DAQ.
To synchronize these two distinct architectures, an analog trigger is generated from the focal-plane detector signals and fed into the silicon detector DAQ. In addition, the "accepted" signals from the silicon DAQ are fed back into the streaming DAQ to serve as reference signals.
The system matches the precise timing information of these reference signals and integrates the data from both systems to reconstruct individual physical events.
This poster presents the development of this hybrid integration method and provides an evaluation of its performance. Speaker: Shotaro Maesato (The University of Osaka) -
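The matching step that merges the two DAQ branches — pairing each "accepted" reference timestamp from the triggered silicon DAQ with the corresponding event in the streaming DAQ — can be sketched as a nearest-neighbor search within a tolerance. A minimal illustration, assuming sorted streaming-DAQ timestamps; the names and tolerance handling are hypothetical, not the actual RCNP implementation:

```python
import bisect

def match_events(stream_ts, ref_ts, tolerance):
    """Pair each reference timestamp with the nearest streaming-DAQ
    timestamp, if within +/- tolerance. stream_ts must be sorted.
    Returns a list of (ref, matched_stream_ts or None)."""
    pairs = []
    for r in ref_ts:
        i = bisect.bisect_left(stream_ts, r)
        best = None
        # Only the two neighbors around the insertion point can be nearest.
        for j in (i - 1, i):
            if 0 <= j < len(stream_ts):
                if best is None or abs(stream_ts[j] - r) < abs(best - r):
                    best = stream_ts[j]
        if best is not None and abs(best - r) <= tolerance:
            pairs.append((r, best))
        else:
            pairs.append((r, None))   # no coincidence within tolerance
    return pairs
```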
11:05
106 Timing and Slow Control backend for CMS Drift Tube On Board electronics 20m
Commissioning of the detectors for the High‑Luminosity Large Hadron Collider (HL‑LHC), also referred to as the Phase‑2 upgrade, is planned for the period 2026–2028 at CERN. In this framework, the readout and control electronics of the Drift Tube (DT) subdetector of the CMS experiment have undergone a complete redesign. This upgrade is being carried out to cope with the expected increase in event rates and to ensure compatibility with the upcoming Trigger and Timing Control Distribution System (TCDS2). In parallel with this, a new back‑end system for timing distribution and slow control has been developed. This system is based on a set of FPGA‑based Advanced Telecommunications Computing Architecture (ATCA) boards onto which custom firmware has been deployed. Each board is connected to the server through a network interface and provides approximately one hundred high‑speed optical links implementing the Low Power Gigabit Transceiver (LpGBT) protocol, used to distribute machine‑synchronous timing signals and slow‑control commands to the detector electronics. The VHDL firmware integrates intellectual property (IP) cores, makes use of the SLAC Ultimate RTL Framework (SURF), and incorporates dedicated modules specifically designed to interface and manage custom on‑detector electronics. Moreover, a Python software library has been developed to configure, monitor, and control the boards through a dedicated Reliable User Datagram Protocol (RUDP) connection between the boards and the server.
This contribution will describe the overall design of the system, detail the implementation at the firmware level, and present the results of the benchtop and in‑field tests performed to date. Speaker: Antonio Bergnoli (Universita e INFN, Padova (IT)) -
11:05
116 The architecture of the Level-0 Global Event Processor at ATLAS 20m
The High-Luminosity upgrade of the Large Hadron Collider (HL-LHC) at CERN will increase the instantaneous proton-proton collision rate by a factor of 3–4 relative to the current LHC, delivering an integrated luminosity of approximately 3000–4000 fb$^{-1}$ over its operational lifetime. This substantial increase in collision rate, pile-up, and event complexity requires a complete redesign of the ATLAS trigger and data acquisition (TDAQ) system.
A key component of this redesign is the Global Event Processor (GEP), which performs low-latency event selection, filtering, and routing in real time. Following the upgrade, the ATLAS detector is expected to generate data at rates approaching 128 Tb/s, necessitating aggressive real-time data reduction to meet storage and bandwidth constraints.
The upgraded TDAQ architecture employs a distributed processing model in which collision events are delivered every 25 ns to a pool of FPGA-based GEP units in a round-robin fashion. Detector data is compressed to approximately 60 Tb/s for optical transmission and decompressed upon arrival at the GEP. Data arrival latencies vary from 3.18 µs to 6.26 µs after a collision due to heterogeneous detector technologies. Within each GEP, event processing is performed by a directed acyclic graph (DAG) of algorithm processing units (APUs) operating under a strict latency constraint of 7.66 µs after the collision event.
This paper presents a novel modular architecture enabling asynchronous data arrival, buffering, synchronization, and high-throughput processing across networks of streaming APUs, suitable for implementation in single- or multi-die ASICs or FPGAs.
Speaker: Mayan Tamari (Brookhaven National Laboratory (US)) -
11:05
126 Design and Results of a Radiation Test System for SEE and TID Assessment of the VLAST-P CsI(Tl) Calorimeter Readout Electronics 20m
The Very Large Area gamma-ray Space Telescope-Pathfinder (VLAST-P) operates in a low-Earth orbit at an altitude of approximately 500 km, where exposure to space radiation poses reliability challenges for its readout electronics. Accordingly, radiation-induced single-event effects (SEE) and total ionizing dose (TID) degradation require systematic evaluation. To address this, a ground-based heavy-ion irradiation test system was developed for SEE and TID assessment of the VLAST-P calorimeter readout electronics. The FPGA-based system adopts a radiation-decoupled modular architecture, enabling real-time SEE detection and protection as well as quantitative evaluation of TID-induced performance degradation. An adaptive characterization method for SEL recovery, combined with a global timestamp mechanism, is implemented to support quantitative recovery analysis. Heavy-ion irradiation results show that no SEL events were observed in either the mixed-signal AD9266 or the analog THS4524, while 45 SEU events were recorded in the AD9266. TID testing up to 10 krad resulted in an increase of THS4524 noise from 75.6 to 94.7 mV and of AD9266 DNL from 0.25 to 0.27 LSB.
Speaker: JiaAo Zhang (University of Science and Technology of China) -
11:05
37 Electronics for Two-Dimensional Magnetic-Field Measurement in a Polarized-Light Helium Optically Pumped Magnetometer 20m
Helium optically pumped magnetometers (He-OPMs) offer high sensitivity, wide bandwidth, and room-temperature operation, making them suitable for precision magnetic-field measurements. When operated with linearly polarized optical pumping, the light propagation direction and polarization axis provide two intrinsic reference vectors, allowing orientation-related information beyond scalar magnetometry. By applying an oscillating magnetic field at the Larmor frequency, such information can be extracted from the harmonic components of the transmitted optical signal, whose amplitudes and phases reflect the magnetic field relative to the optical reference frame. In this work, dedicated FPGA-based digital signal processing electronics are designed for a linearly pumped He-OPM. The architecture enables real-time detection of the DC component as well as the fundamental and second harmonics, avoiding complex analog demodulation. Experimental results obtained under two-dimensional magnetic-field configurations are presented and show good agreement with the proposed model and analysis approach.
Speaker: Xuan Wang (University of Science and Technology of China) -
11:05
46 CODA SRO DAQ with Real-Time Data Processing Components 20m
The CODA streaming readout (SRO) data acquisition system has been enhanced with native support for the EJFAT transport protocol, enabling scalable, loss-tolerant, and high-throughput data movement tightly coupled to real-time processing. This effort introduces firmware extensions to the CODA VTP that segment crate-level aggregated data windows into UDP packets, enrich them with EJFAT load balancing and reassembly metadata, and transmit them to a remote EJFAT load balancing service for dynamic traffic distribution.
On the receiving side, a new CODA event builder component ingests multiple EJFAT UDP streams originating from distributed readout crates, performs deterministic reassembly of the original high-rate readout frames, and coherently aggregates data across the full detector. For each SRO time tick, fully assembled detector frames are delivered to a dedicated online processing pipeline optimized for low-latency data cleaning, compression, and tiered storage.
The processing pipeline is designed for extensibility, allowing additional real-time algorithms, including calibration, event reconstruction, and data quality monitoring, to be composed and executed online. This provides prompt feedback to experimental operations and enables data-driven steering during data taking. The pipeline is implemented using the ERSAP flow-based, reactive actor framework, whose distributed and asynchronous execution model integrates naturally with the CODA architecture. The upgraded CODA SRO system and its real-time processing pipelines have been commissioned on local test platforms, and deployment is underway for operation with the CLAS12 detector in full SRO mode. Speaker: David Abbott -
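The deterministic reassembly performed by the receiving event-builder component can be sketched as grouping packets by time tick and ordering them by sequence number. A simplified illustration only — the tuple fields below are invented stand-ins, not the actual EJFAT header layout:

```python
def reassemble(packets):
    """Reassemble readout frames from possibly out-of-order UDP packets.

    Each packet is (tick, seq, total, payload): 'tick' is the SRO time
    tick, 'seq' the segment index, 'total' the segment count for that
    tick. A frame is complete when all 'total' segments have arrived.
    """
    frames, pending = {}, {}
    for tick, seq, total, payload in packets:
        pending.setdefault(tick, {})[seq] = payload
        if len(pending[tick]) == total:
            # All segments present: concatenate in sequence order.
            frames[tick] = b"".join(pending[tick][s] for s in range(total))
            del pending[tick]
    return frames
```

Because completion is decided per tick from the segment count, the result is independent of packet arrival order across the distributed readout crates.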
11:05
49 A versatile, high accuracy, pulsed measurement device 20m
Klystron modulators are key elements in free-electron lasers: they provide the high-voltage pulses that bias klystron tubes, with energies of several hundred joules. Amplitude variations directly affect the gain and phase of the amplified RF pulses and therefore the accelerating fields created by the RF cavities. For machines such as SwissFEL (Swiss Free Electron Laser), the required HV pulse stability must be better than 15 ppm (parts per million). Stability is calculated from 100 pulses at up to 100 Hz as the relative standard deviation of gated averages around the pulse's flat-top region, where the RF pulse is amplified.
Measuring such small variations often relies on a pulse-offsetting technique, magnifying the flat-top region for better quantisation resolution. This requires low-noise analog electronics, such as summing amplifiers and clippers with adequate bandwidth and settling time, and the measurement setup usually combines an external differential amplifier with a high-performance oscilloscope running statistical analysis.
Our approach instead uses a Red Pitaya STEMlab 125-14 board connected to a custom signal-conditioner board developed at PSI (Paul Scherrer Institut) to monitor stability in klystron modulators. It captures both pulse current and voltage in real time and analyzes the voltage for stability metrics. The compact design fits easily into modulator cabinets for ongoing, precise performance checks.
Our system, called the PMU (Pulse Measurement Unit), achieves a resolution limit of 5-6 ppm at a 1 microsecond measurement gate, exploiting 67% of the ADC full-scale range. We present the complete system and report on our initial operational experience. Speaker: Mario Jurcevic -
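The stability figure defined above — the relative standard deviation of gated flat-top averages, expressed in ppm — can be computed as follows. A minimal sketch; the PMU computes this on-board from the conditioned voltage signal:

```python
import statistics

def flat_top_stability_ppm(pulses, gate):
    """Relative standard deviation (ppm) of gated averages over a pulse
    series. 'pulses' is a list of sampled waveforms (one per pulse) and
    'gate' a (start, stop) sample range covering the flat-top region."""
    start, stop = gate
    # Gated average of each pulse's flat top.
    means = [sum(p[start:stop]) / (stop - start) for p in pulses]
    # Relative standard deviation across pulses, scaled to ppm.
    return 1e6 * statistics.pstdev(means) / statistics.fmean(means)
```

With 100 identical-amplitude pulses the result is 0 ppm; any pulse-to-pulse amplitude jitter in the gate shows up directly in the returned value.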
11:05
51 Exploration on Quasi-Steady-State Real-Time Data Access for EAST Tokamak 20m
The EAST (Experimental Advanced Superconducting Tokamak) facility is designed to achieve high-performance steady-state operation. Its data acquisition system provides unified data acquisition and long-term data storage for the various diagnostic systems, primarily supporting offline data analysis. With the advancement of long-pulse operation on EAST, researchers require access to data during the long pulse itself, for live monitoring of key signals and even for simple real-time data processing. To address this, quasi-steady-state real-time data transmission within the data acquisition framework has been explored, and a prototype system has been developed for tests. The prototype consists of a DAQ console, DAQ units, and a data service, using ZeroMQ for communication. The DAQ console manages acquisition configurations and controls the workflow. Each DAQ unit acquires diagnostic data in real time and transmits it, formatted in time slices, to the data service module using a request/reply pattern. Upon receipt, the data service immediately publishes the data via a publish/subscribe pattern while simultaneously storing it for long-term archiving. Clients can subscribe to specific signals to receive real-time streams for visualization or basic analysis, or they can retrieve complete datasets through the offline data service. Initial feasibility has been demonstrated through a successful 1000-second test, and further evaluation with the prototype system is currently in progress.
Speaker: Ying Chen -
11:05
65 Data flow management and service of Lingshu plasma control system 20m
To achieve the complex control of high-temperature, nonlinear fusion plasmas, the overall control task is decoupled into multiple parallel sub-tasks. To host these parallel tasks, the modular Lingshu plasma control architecture is designed as shown in Figure 1. At its core, a unified data flow management system is implemented within this architecture to coordinate all parallel components, thereby enabling integrated control over the plasma discharge.
This data flow management framework centers on a Data Engine, which uniformly manages intra-component data through a key-value mapping structure, as shown in Figure 2. It integrates a Shared Memory Manager (ShmMgr) to provide efficient, zero-overhead data exchange between components. Building upon this architecture, the system features two critical data services: a Just-In-Time Configuration Distribution Service, which dynamically resolves and delivers control parameters within each sub-millisecond control cycle (its operating principle is shown in Figure 3), and a Real-time Data Archiving Service designed for long-pulse operations, which ensures persistent storage without interfering with deterministic real-time performance (its workflow is illustrated in Figure 4).
The proposed framework has been validated through extensive experimental campaigns on the EAST tokamak. Configuration resolution and data archiving introduce minimal overhead (<20µs and <12µs, respectively, as detailed in Figure 5 and Figure 6), meeting strict real-time requirements. A simulated 25-hour discharge, illustrated in Figure 7, further confirmed its steady-state operational capability. Speaker: jq zhu -
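The key-value Data Engine at the core of the framework can be pictured as a publish/get store mediating between parallel control components. A behavioral sketch only — the Lingshu implementation exchanges data through a shared-memory manager for zero-overhead access, not a Python dictionary, and all names here are illustrative:

```python
class DataEngine:
    """Minimal key-value data engine mediating exchange between
    parallel control tasks within one control cycle (sketch)."""

    def __init__(self):
        self._store = {}         # key -> latest value
        self._subscribers = {}   # key -> list of callbacks

    def publish(self, key, value):
        """Store a value and push it to any subscribed components."""
        self._store[key] = value
        for cb in self._subscribers.get(key, []):
            cb(value)

    def get(self, key, default=None):
        """Read the latest value written under a key."""
        return self._store.get(key, default)

    def subscribe(self, key, callback):
        """Register a component's interest in updates to a key."""
        self._subscribers.setdefault(key, []).append(callback)
```

The key-value indirection is what decouples the parallel sub-tasks: producers and consumers agree only on key names, never on each other's interfaces.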
11:05
68 Low-latency digital control of an all-in-fiber phase noise reduction loop for VIRGO Squeezer 20m
The “all in fiber” phase noise cancellation loop demonstrator makes use of pigtailed mirrors, beam splitters and photodetectors to sense phase deviations, and an acousto-optic modulator (AOM) to apply the proper frequency shift to the reference beam of the VIRGO detector’s squeezer. This compensates, to a certain extent, for environmental factors affecting the phase of light along its path.
The setup used to employ a bench-top analog RF generator with external frequency-modulation capability to drive the AOM. This instrument has recently been replaced with a direct digital synthesizer, improving system integration and reconfigurability, with the substantial advantage of an overall lower phase-noise profile of the modulated RF tone. Moreover, the settling time for frequency steering has been reduced by almost an order of magnitude with respect to the original circuit, allowing the noise-reduction bandwidth to be extended to 100 kHz. An ADC bridges the analog and numerical domains, and dedicated firmware implements the frequency-tuning logic on an FPGA. The loop compensation, originally put in place with an analog PID module and tuned with the Ziegler–Nichols method, has been transposed to the digital domain by “emulation”, using Tustin’s bilinear transform to obtain the coefficients of an IIR filter.
The contribution briefly explains the working principle of the optical setup, reports on the ancillary RF electronics, and especially discusses the control strategy and its implementation. Latencies, quantization effects and other limitations are also discussed, highlighting further improvements to be made. Speaker: Marco Toffano (Universita e INFN, Padova (IT)) -
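The Tustin step mentioned above maps a continuous PID C(s) = Kp + Ki/s + Kd·s to discrete IIR coefficients by substituting s → (2/T)(1−z⁻¹)/(1+z⁻¹). A sketch of that mapping — the function name is illustrative, and practical designs also band-limit the derivative term, which this omits:

```python
def pid_to_iir(kp, ki, kd, ts):
    """Discretize C(s) = kp + ki/s + kd*s with Tustin's transform
    at sample period ts. Returns (b, a) for the difference equation
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2].
    Combining the three terms over (1 - z^-2) gives:"""
    b = [kp + ki * ts / 2 + 2 * kd / ts,    # b0
         ki * ts - 4 * kd / ts,             # b1
         -kp + ki * ts / 2 + 2 * kd / ts]   # b2
    a = [1.0, 0.0, -1.0]                    # denominator 1 - z^-2
    return b, a
```

With ki = kd = 0 the filter collapses to a pure gain kp, as expected; the z = +1 pole preserves the integrator, while the z = −1 pole from the unfiltered derivative is why a roll-off is added in practice.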
11:05
81 FPGA-Accelerated Pattern Recognition for the ATLAS Event Filter at the HL-LHC 20m
The High-Luminosity Large Hadron Collider (HL-LHC) will deliver a five- to seven-fold increase in instantaneous luminosity relative to the original LHC design, and approximately a three-fold increase compared to Run-3 operation, significantly increasing detector readout volumes and placing substantially higher demands on the trigger and data-acquisition systems. To meet these challenges, the ATLAS Event Filter Tracking group is evaluating heterogeneous computing platforms to reduce the size, cost, and power consumption of the event-processing farm. Options in which key processing algorithms are offloaded to either FPGA or GPU accelerator cards are compared directly with a traditional CPU-only farm. This contribution presents an FPGA-based implementation of the pattern-recognition stage in the tracking pipeline, developed using High-Level Synthesis and integrated within the ATLAS Athena software framework using the OpenCL cross-platform programming model. We describe the architecture, firmware design choices, and workflow for hardware-software co-execution. Performance studies compare the physics efficiency and throughput of the FPGA implementation against other technologies, demonstrating the potential of FPGA acceleration for the HL-LHC Event Filter.
Speaker: Priya Sundararajan (University of California Irvine (US)) -
11:05
82 Regional reconstruction of tracks for the ATLAS Event Filter using GPU accelerators 20m
The upcoming high-luminosity phase of the LHC (HL-LHC) presents several challenges for the ATLAS experiment's Trigger and Data Acquisition system, necessitating a full upgrade of the system. A key challenge for the Event Filter, where high-level event reconstruction and final event selection will run at 1 MHz, lies in the computational demand for online track reconstruction within the Inner Tracker in selected regions of interest at the full trigger rate and of the full tracker acceptance at 150 kHz. Over the past few years, extensive research has been conducted into utilising hardware accelerators in the ATLAS Event Filter system to improve tracking throughput and reduce full-system power consumption. Various end-to-end track reconstruction pipelines have been developed using GPUs and FPGAs. These pipelines demonstrate their capabilities by offloading different amounts of the computing load to the accelerators.
This contribution focuses on developments and optimizations for GPU-based track reconstruction in regions of interest. The scaling of throughput and latency with the size and occupancy of regions of interest has been studied for GPU-based tracking pipelines originally designed for efficiently reconstructing full tracker acceptance simultaneously. Different approaches for improving the utilization of the GPU resources for the smaller regions are presented and compared to the full acceptance algorithms as well as the CPU counterparts.
Speaker: Benjamin Michael Wynne (The University of Edinburgh (GB)) -
11:05
84 Heterogeneous Acceleration of Graph Neural Networks on Versal ACAP for High-Level Trigger Track Reconstruction 20m
The next generation of high-luminosity collider experiments, such as the HL-LHC and CEPC, will generate data streams at terabyte-per-second rates, imposing extreme real-time processing demands on trigger and data acquisition (TDAQ) systems. Online track reconstruction is pivotal for effective data reduction, transitioning TDAQ from simple filtering to precision selection. Graph Neural Networks (GNNs) have emerged as a powerful, data-driven solution for this task due to their natural alignment with particle track data structures. However, deploying GNNs in online systems requires stringent adherence to latency, throughput, and power constraints, which necessitates dedicated hardware acceleration beyond the limitations of CPUs and GPUs. This paper proposes and demonstrates a comprehensive hardware-software co-design approach, implementing a complete real-time GNN-based track-finding pipeline on the AMD Versal ACAP. Our methodology maps the algorithm onto the Versal VCK190's heterogeneous architecture. The graph construction phase, utilizing an ε-nn algorithm, is implemented in the programmable logic (PL), incorporating a lookup table (LUT) to accelerate neighborhood searches. The core denoising GNN model is parallelized and deployed onto the AIE array, employing a carefully designed dataflow and memory-conscious partitioning strategy to balance computational parallelism with the inherent resource constraints of individual AIE tiles. Implementation results show a GNN latency that scales linearly with input size, measured at approximately 1.086 ms for 100 nodes while consuming 21.75% of the AIE array. Functional validation on a hardware testbed confirms the system's operational correctness. This work substantiates the feasibility of leveraging Versal ACAP's heterogeneous compute paradigm to meet the rigorous real-time processing challenges of next-generation high-energy physics experiments.
Speaker: Zhao-zhi Liu -
11:05
85 A Low-Dead Time FPGA-TDC for Optical Beam-Loss Monitor System 20m
At the Anhui University infrared free-electron laser (FEL) facility, an optical beam-loss monitor (oBLM) diagnostic system is under development for spatially continuous beam-loss monitoring along the accelerator beamline. To meet the stringent position-resolution requirements of compact accelerators and to mitigate event ambiguity under multi-bunch operation, an optical fibre is installed parallel to the beamline and coupled to photomultiplier tubes (PMTs) at both ends. Cherenkov light generated by beam losses propagates along the fibre to the two PMTs. After conditioning with a transimpedance amplifier and a high-speed comparator, the signals are digitized by an FPGA-based time-to-digital converter (FPGA-TDC) to obtain precise timestamps. The differential arrival times measured at both ends enable accurate reconstruction of the bunch ID and the longitudinal beam-loss position. To satisfy the oBLM requirements on timing precision and short inter-event intervals, we propose a low-dead-time FPGA-TDC architecture. First, by optimizing the clock distribution path, we implement a four-edge, dual-sampling TDC within a single clock domain. Second, we develop an encoding scheme based on sub-event aggregation, which reduces the system-level dead time of the WaveUnion-TDC to one system clock cycle. The proposed design has been implemented on a Xilinx Zynq UltraScale+ ZU3EG development board, achieving a timing precision of approximately 3.34 ps and a dead time of 1.333 ns.
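As a toy model of the fine-time interpolation idea behind such an FPGA-TDC (illustrative only: the clock frequency and sample encoding are assumptions, and this is not the authors' WaveUnion encoding):

```python
CLK_PERIOD_PS = 4000   # assumed 250 MHz system clock
PHASES = 4             # four sampling edges per clock period (four-edge, dual sampling)

def timestamp_ps(coarse_cycles, samples):
    """Coarse time from the system-clock counter plus a fine bin given by the
    first of the four phase-ordered samples that saw the hit signal high."""
    fine_bin = samples.index(1)
    return coarse_cycles * CLK_PERIOD_PS + fine_bin * (CLK_PERIOD_PS // PHASES)

# Hit arriving between the 2nd and 3rd sampling edges of cycle 10
print(timestamp_ps(10, [0, 0, 1, 1]))  # 42000 ps
```

The fine bin quantizes the arrival to a quarter of the clock period; the real design then refines this far further to reach its ~3.34 ps precision.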
Speaker: Dr Yuchen Yang (Anhui University) -
11:05
86 A Heterogeneous Software Framework Design for the Full Software Trigger at CEPC 20m
The Circular Electron-Positron Collider (CEPC) is a new-generation large-scale particle collider proposed by China, designed primarily to study key particles such as the Higgs boson. It is estimated that CEPC's peak data bandwidth can reach the TB/s level. Currently, the software trigger solution, which bypasses hardware triggering by directly reading all data and performing flexible filtering through software, is considered an important candidate. It not only simplifies hardware wiring but also enhances the flexibility of the trigger strategy. However, it correspondingly increases the demand for online data processing capabilities. Heterogeneous computing, as a crucial acceleration method today, significantly improves processing performance by coordinating different types of processors (e.g., CPUs and GPUs) to handle diverse computational tasks. Given the high compatibility between CEPC's data processing tasks and GPU architecture, it is well-suited for acceleration through heterogeneous computing. This paper introduces a heterogeneous software framework for the CEPC’s online data processing. The framework aims to integrate accelerated computing into the real-time workflow, meeting the high-throughput and low-latency demands of the full software trigger system.
Speaker: Xu Zhang (The Institute of High Energy Physics of the Chinese Academy of Sciences)
-
11:25
→
12:30
Poster session: Data Acquisition and Trigger Architectures
-
11:25
Design of the Global Timing Unit (GTU) for ePIC 20m
The design of the Global Timing Unit (GTU) for the ePIC experiment at the EIC is described. ePIC system clock distribution test results are shown using the recently produced GTU engineering article, which includes the GTU base board and the Optic Transceiver Plugin Modules. The multi-gigabit link between the GTU and the DAM, carrying ePIC run-control commands and DAQ status, is tested. (If testing time permits and the logistics are available, the ePIC system clock recovered on the DAM/FLX152 and the TOF RDO will be shown. If testing time permits, the concept of multiple ePIC run-control user interfaces will be shown.)
Speaker: William Gu (Jefferson Lab) -
11:45
Development of the IDROGEN White Rabbit system for SuperKEKB and future projects at KEK 20m
Clock synchronization among distant FPGA circuits based on the White Rabbit (WR) network is a new technology that will become the standard in future accelerator projects. We plan to employ the IDROGEN carrier board, developed at CNRS/IN2P3, IJCLab, at SuperKEKB and in future projects at KEK. Several feasibility studies were carried out jointly by IJCLab and KEK. The combined IDROGEN-WR system can provide synchronized RF signals at individual slave nodes, with an rms noise level of sub-picosecond order. We developed a precise synchronization scheme between the IDROGEN-WR setup output and the RF clock employed in the Low-Level RF system operating SuperKEKB, and confirmed the long-term stability of their relative phase. The results indicate that SuperKEKB can increase the number of RF signal branches by implementing the IDROGEN carrier board for several synchronization purposes. We also plan to develop a distributed DAQ system as another application of the combined IDROGEN-WR system, in which the sampling clocks of the ADC circuits on individual slave nodes are synchronized. We consider this approach a unified, standalone DAQ system covering the sensors installed along a large-scale accelerator beamline, allowing the causal relations among simultaneous events in accelerator operation to be determined. The latest results and prospects of our R&D activities are reported.
Speaker: Hiroshi Kaji -
12:05
A Heterogeneous Software Framework Design for the Full Software Trigger at CEPC 20m
The Circular Electron-Positron Collider (CEPC) is a new-generation large-scale particle collider proposed by China, designed primarily to study key particles such as the Higgs boson. It is estimated that CEPC's peak data bandwidth can reach the TB/s level. Currently, the software trigger solution, which bypasses hardware triggering by directly reading all data and performing flexible filtering through software, is considered an important candidate. It not only simplifies hardware wiring but also enhances the flexibility of the trigger strategy. However, it correspondingly increases the demand for online data processing capabilities. Heterogeneous computing, as a crucial acceleration method today, significantly improves processing performance by coordinating different types of processors (e.g., CPUs and GPUs) to handle diverse computational tasks. Given the high compatibility between CEPC's data processing tasks and GPU architecture, it is well-suited for acceleration through heterogeneous computing. This paper introduces a heterogeneous software framework for the CEPC’s online data processing. The framework aims to integrate accelerated computing into the real-time workflow, meeting the high-throughput and low-latency demands of the full software trigger system.
Speaker: Xu Zhang (The Institute of High Energy Physics of the Chinese Academy of Sciences) -
12:05
A versatile, high accuracy, pulsed measurement device 20m
Klystron modulators are key elements in free-electron lasers. They provide high-voltage pulses to bias klystron tubes with energies of several hundred joules. Amplitude variations directly affect the gain and phase of the amplified RF pulses and therefore the accelerating fields created by the RF cavities. For machines such as SwissFEL (the Swiss Free Electron Laser), the required HV pulse stability must be better than 15 ppm (parts per million). Stability is calculated from 100 pulses at up to 100 Hz as the relative standard deviation of gated averages around the pulse's flat-top region, where the RF pulse is amplified. Measuring such small variations often relies on a pulse-offsetting technique that magnifies the flat-top region for better quantisation resolution. This requires low-noise analog electronics such as summing amplifiers and clippers with adequate bandwidth and settling time. Such a measurement setup usually uses an external differential amplifier and a high-performance oscilloscope with statistical analysis to measure stability. Our approach uses a Red Pitaya STEMlab 125-14 board connected to a custom signal-conditioner board developed at PSI (Paul Scherrer Institut) to monitor stability in klystron modulators. It captures both pulse current and voltage in real time and analyzes only the voltage for stability metrics. The compact design fits easily into modulator cabinets for ongoing, precise performance checks. Our system, called the PMU (Pulse Measurement Unit), achieves a resolution limit of 5-6 ppm at a 1-microsecond measurement gate, exploiting 67% of the ADC full-scale range.
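The stability figure defined above (relative standard deviation of gated flat-top averages over 100 pulses, in ppm) can be sketched as follows; the waveform shape, gate position, and jitter level are invented for the demonstration:

```python
import numpy as np

def stability_ppm(pulses, gate_start, gate_len):
    """Relative standard deviation, in ppm, of the per-pulse averages
    taken inside the flat-top measurement gate."""
    means = np.array([p[gate_start:gate_start + gate_len].mean() for p in pulses])
    return 1e6 * means.std(ddof=1) / means.mean()

# 100 synthetic pulses (ramp / flat-top / ramp) with ~10 ppm amplitude jitter
rng = np.random.default_rng(0)
shape = np.concatenate([np.linspace(0, 1, 50), np.ones(200), np.linspace(1, 0, 50)])
pulses = [shape * (1 + 1e-5 * rng.standard_normal()) for _ in range(100)]
print(f"{stability_ppm(pulses, 100, 100):.1f} ppm")  # on the order of 10 ppm
```

Averaging over the gate suppresses uncorrelated sample noise, which is how a 14-bit digitizer can resolve few-ppm pulse-to-pulse variations.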
We present the complete system and report on our initial operational experience.
Speaker: Mario Jurcevic -
12:05
CODA SRO DAQ with Real-Time Data Processing Components 20m
The CODA streaming readout (SRO) data acquisition system has been enhanced with native support for the EJFAT transport protocol, enabling scalable, loss-tolerant, and high-throughput data movement tightly coupled to real-time processing. This effort introduces firmware extensions to the CODA VTP that segment crate-level aggregated data windows into UDP packets, enrich them with EJFAT load balancing and reassembly metadata, and transmit them to a remote EJFAT load balancing service for dynamic traffic distribution.
On the receiving side, a new CODA event builder component ingests multiple EJFAT UDP streams originating from distributed readout crates, performs deterministic reassembly of the original high-rate readout frames, and coherently aggregates data across the full detector. For each SRO time tick, fully assembled detector frames are delivered to a dedicated online processing pipeline optimized for low-latency data cleaning, compression, and tiered storage.
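The two stages above — segmentation with reassembly metadata on the sending side and deterministic reassembly on the receiving side — can be sketched as follows. The header layout is a hypothetical stand-in, not the actual EJFAT metadata format:

```python
import struct

MTU_PAYLOAD = 16             # artificially small for the demo; real links use jumbo frames
HDR = struct.Struct(">QII")  # hypothetical metadata: (time tick, byte offset, total length)

def segment(tick, frame):
    """Split one aggregated data window into UDP-sized packets with reassembly metadata."""
    return [HDR.pack(tick, off, len(frame)) + frame[off:off + MTU_PAYLOAD]
            for off in range(0, len(frame), MTU_PAYLOAD)]

def reassemble(packets):
    """Deterministic reassembly that tolerates out-of-order packet arrival."""
    pieces = {}
    for pkt in packets:
        _tick, off, _total = HDR.unpack(pkt[:HDR.size])
        pieces[off] = pkt[HDR.size:]
    return b"".join(pieces[off] for off in sorted(pieces))

frame = bytes(range(50))
assert reassemble(reversed(segment(42, frame))) == frame  # order-independent
```

Carrying the offset and total length in every packet is what lets the event builder rebuild frames deterministically even when the load balancer reorders traffic.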
The processing pipeline is designed for extensibility, allowing additional real-time algorithms, including calibration, event reconstruction, and data quality monitoring, to be composed and executed online. This provides prompt feedback to experimental operations and enables data-driven steering during data taking. The pipeline is implemented using the ERSAP flow-based, reactive actor framework, whose distributed and asynchronous execution model integrates naturally with the CODA architecture. The upgraded CODA SRO system and its real-time processing pipelines have been commissioned on local test platforms, and deployment is underway for operation with the CLAS12 detector in full SRO mode.
Speaker: David Abbott -
12:05
Design and Results of a Radiation Test System for SEE and TID Assessment of the VLAST-P CsI(Tl) Calorimeter Readout Electronics 20m
The Very Large Area gamma-ray Space Telescope-Pathfinder (VLAST-P) operates in a low-Earth orbit at an altitude of approximately 500 km, where exposure to space radiation poses reliability challenges for its readout electronics. Accordingly, radiation-induced single-event effects (SEE) and total ionizing dose (TID) degradation require systematic evaluation. To address this, a ground-based heavy-ion irradiation test system was developed for SEE and TID assessment of the VLAST-P calorimeter readout electronics. The FPGA-based system adopts a radiation-decoupled modular architecture, enabling real-time SEE detection and protection as well as quantitative evaluation of TID-induced performance degradation. An adaptive characterization method for SEL recovery, combined with a global timestamp mechanism, is implemented to support quantitative recovery analysis. Heavy-ion irradiation results show that no SEL events were observed in either the mixed-signal AD9266 or the analog THS4524, while 45 SEU events were recorded in the AD9266. TID testing up to 10 krad resulted in increased noise of the THS4524 from 75.6 to 94.7 mV and a DNL increase of the AD9266 from 0.25 to 0.27 LSB.
Speaker: JiaAo Zhang (University of Science and Technology of China) -
12:05
Development of a Coincidence Measurement System for Scattered and Decay Particles Using a Streaming DAQ and Digitizers 20m
In experiments using the ultra-high-resolution magnetic spectrometer Grand Raiden at the Research Center for Nuclear Physics (RCNP), the University of Osaka, several projects are currently underway to integrate digitizers for waveform acquisition from silicon detectors. Scattered particles are detected by the focal-plane detectors of Grand Raiden to determine the excitation energy, while decay particles from excited nuclei are measured by silicon detectors, whose signal waveforms are acquired using digitizers.
Therefore, it is necessary to develop a method for coincidence measurements of scattered particles and decay particles using both the focal-plane detectors and silicon detectors in experiments with Grand Raiden at RCNP.
Establishing this measurement technique will expand the experimental scope of Grand Raiden at RCNP and provide an essential experimental foundation. The data acquisition (DAQ) system has recently undergone a significant transition. While the system for the focal-plane detectors has been upgraded to a triggerless streaming DAQ to ensure dead-time-free data collection, waveform acquisition for the silicon detectors still relies on a conventional trigger-based DAQ.
To synchronize these two distinct architectures, an analog trigger is generated from the focal-plane detector signals and fed into the silicon detector DAQ. In addition, the "accepted" signals from the silicon DAQ are fed back into the streaming DAQ to serve as reference signals.
The system matches the precise timing information of these reference signals and integrates the data from both systems to reconstruct individual physical events.
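A minimal sketch of this timestamp-matching step (timestamps and tolerance are invented for illustration; the real system matches precise hardware reference signals):

```python
def match_events(stream_ts, accepted_ts, tol_ns=50):
    """Pair each 'accepted' reference timestamp from the trigger-based silicon
    DAQ with the nearest streaming-DAQ timestamp inside the tolerance window.
    Both input lists are assumed sorted in increasing time."""
    pairs, i = [], 0
    for t in accepted_ts:
        while i < len(stream_ts) and stream_ts[i] < t - tol_ns:
            i += 1  # skip streaming hits too early to match this reference
        if i < len(stream_ts) and abs(stream_ts[i] - t) <= tol_ns:
            pairs.append((stream_ts[i], t))
            i += 1
    return pairs

stream = [1000, 2000, 3000, 4000]      # focal-plane streaming timestamps (ns)
accepted = [1010, 2990, 5000]          # silicon-DAQ references; the last is unmatched
print(match_events(stream, accepted))  # [(1000, 1010), (3000, 2990)]
```

The two-pointer scan runs in linear time, which matters when merging long streaming runs with trigger-based data offline.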
This poster presents the development of this hybrid integration method and provides an evaluation of its performance.
Speaker: Shotaro Maesato (The University of Osaka) -
12:05
Exploration on Quasi-Steady-State Real-Time Data Access for EAST Tokamak 20m
The EAST (Experimental Advanced Superconducting Tokamak) facility is designed to achieve high-performance steady-state operation. Its data acquisition system provides unified data acquisition and long-term data storage for various diagnostic systems, primarily supporting offline data analysis. With the advancement of long-pulse operation on EAST, researchers require access to real-time data during long-pulse operation, both for live monitoring of key signals and for simple real-time data processing. To address this, quasi-steady-state real-time data transmission within the data acquisition framework has been explored, and a prototype system has been developed for tests. The prototype consists of a DAQ console, DAQ units, and a data service, utilizing ZeroMQ for communication. The DAQ console manages acquisition configurations and controls the workflow. Each DAQ unit acquires diagnostic data in real time and transmits it, in a time-sliced format, to the data service module using a request/reply pattern. Upon receipt, the data service immediately publishes the data via a publish/subscribe pattern while simultaneously storing it for long-term archiving. Clients can subscribe to specific signals to receive real-time streams for visualization or basic analysis, or retrieve complete datasets through the offline data service. Initial feasibility has been demonstrated through a successful 1000-second test, and further evaluation with the prototype system is currently in progress.
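The publish/subscribe distribution described above can be sketched in a few lines (a pure-Python stand-in for the ZeroMQ pattern; the signal names are invented):

```python
from collections import defaultdict

class SignalBus:
    """Pure-Python stand-in for the publish/subscribe half of the prototype:
    clients subscribe to specific signals and receive each published time slice."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # signal name -> client callbacks

    def subscribe(self, signal, callback):
        self.subscribers[signal].append(callback)

    def publish(self, signal, time_slice):
        # Deliver the slice to every subscriber of this signal; slices for
        # unsubscribed signals are simply not forwarded (they still go to storage).
        for cb in self.subscribers[signal]:
            cb(signal, time_slice)

bus = SignalBus()
received = []
bus.subscribe("ip_plasma", lambda sig, ts: received.append((sig, ts)))
bus.publish("ip_plasma", "slice-0042")   # delivered to the client
bus.publish("te_core", "slice-0042")     # no subscriber for this signal
print(received)  # [('ip_plasma', 'slice-0042')]
```

Decoupling publishers from subscribers this way is what lets new monitoring clients attach mid-pulse without touching the acquisition path.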
Speaker: Ying Chen -
12:05
FPGA-Accelerated Pattern Recognition for the ATLAS Event Filter at the HL-LHC 20m
The High-Luminosity Large Hadron Collider (HL-LHC) will deliver a five- to seven-fold increase in instantaneous luminosity relative to the original LHC design, and approximately a three-fold increase compared to Run-3 operation, significantly increasing detector readout volumes and placing substantially higher demands on the trigger and data-acquisition systems. To meet these challenges, the ATLAS Event Filter Tracking group is evaluating heterogeneous computing platforms to reduce the size, cost, and power consumption of the event-processing farm. Options where key processing algorithms are offloaded to either FPGA or GPU accelerator cards are compared directly to a traditional CPU-only farm. This contribution presents an FPGA-based implementation of the pattern-recognition stage in the tracking pipeline, developed using High-Level Synthesis and integrated within the ATLAS Athena software framework using the OpenCL cross-platform programming model. We describe the architecture, firmware design choices, and workflow for hardware-software co-execution. Performance studies compare the physics efficiency and throughput of the FPGA implementation against other technologies, demonstrating the potential of FPGA acceleration for the HL-LHC Event Filter.
Speaker: Priya Sundararajan (University of California Irvine (US)) -
12:05
Heterogeneous Acceleration of Graph Neural Networks on Versal ACAP for High-Level Trigger Track Reconstruction 20m
The next generation of high-luminosity collider experiments, such as the HL-LHC and CEPC, will generate data streams at terabyte-per-second rates, imposing extreme real-time processing demands on trigger and data acquisition (TDAQ) systems. Online track reconstruction is pivotal for effective data reduction, transitioning TDAQ from simple filtering to precision selection. Graph Neural Networks (GNNs) have emerged as a powerful, data-driven solution for this task due to their natural alignment with particle track data structures. However, deploying GNNs in online systems requires stringent adherence to latency, throughput, and power constraints, which necessitates dedicated hardware acceleration beyond the limitations of CPUs and GPUs. This paper proposes and demonstrates a comprehensive hardware-software co-design approach, implementing a complete real-time GNN-based track-finding pipeline on the AMD Versal ACAP. Our methodology maps the algorithm onto the Versal VCK190's heterogeneous architecture. The graph construction phase, utilizing an ε-nn algorithm, is implemented in the PL, incorporating an LUT to accelerate neighborhood searches. The core denoising GNN model is parallelized and deployed onto the AIE array, employing a carefully designed dataflow and memory-conscious partitioning strategy to balance computational parallelism with the inherent resource constraints of individual AIE tiles. Implementation results show a GNN latency that scales linearly with input size, measured at approximately 1.086 ms for 100 nodes while consuming 21.75% of the AIE array. Functional validation on a hardware testbed confirms the system's operational correctness. This work substantiates the feasibility of leveraging Versal ACAP's heterogeneous compute paradigm to meet the rigorous real-time processing challenges of next-generation high-energy physics experiments.
Speaker: Zhao-zhi Liu -
12:05
MicroTCA-based low latency data streaming and processing architecture using UDP offload and Time Sensitive Networking 20m
As experimental fusion devices transition from shot-based operation to longer pulse duration, diagnostic and control systems need to evolve to support this kind of operation. This work proposes a distributed data acquisition architecture that aims to provide ultra-low latency communications based on full hardware User Datagram Protocol (UDP) offloading, leveraging the emerging IEEE Time Sensitive Networking (TSN) technologies for deterministic latency and synchronization, and providing an edge computing platform that allows the implementation of Digital Signal Processing (DSP) algorithms and Artificial Intelligence (AI) real-time applications.
The solution proposes the use of the Micro Telecommunications Computing Architecture (MTCA) standard as the foundation and is entirely built from Commercial Off-The-Shelf (COTS) components. The core device is an Advanced Mezzanine Card (AMC) form-factor board based on an AMD Zynq UltraScale+ MPSoC, which enables heterogeneous processing through its integrated ARM processing system and programmable logic. For connectivity, an FPGA Mezzanine Card (FMC) module with dual Small Form-factor Pluggable (SFP+) cages uses the device's high-speed transceivers to establish a 10 Gigabit Ethernet (10GbE) fiber optic link, on top of which a UDP offload engine ensures efficient data streaming to external systems.
Speaker: Alejandro Piñas-Higueruela (Universidad Politecnica de Madrid) -
12:05
Real-Time Data Acquisition for Atmospheric Muography Based on the Thin Gap Chamber 20m
Accurate intensity forecasting for typhoons, as high-energy weather systems, relies heavily on the dynamic evolution of their internal fine structures (e.g., eyewall pressure gradients and core density distributions). However, inherent limitations in the penetration depth and spatial resolution of conventional atmospheric density imaging techniques preclude the direct observation of these structures. Exploiting the correlation between the flux of cosmic-ray muons and atmospheric density, this study developed a system for atmospheric muography, providing critical data support for the retrieval of the real-time pressure-field distribution in the typhoon core region. The developed system employs a 384-channel Thin Gap Chamber (TGC) detector array, comprising two identical double-layer TGC units. Each unit includes an orthogonal readout structure composed of 96 anode wires and 96 cathode strips. The data acquisition system adopts a distributed architecture, in which two FEBs and one DAQ board are combined to implement muon event selection for the multi-layer detectors. The final three-dimensional muon track reconstruction is completed offline. This prototype system has been deployed on the "TongJi • Marine No.1" Observation Tower in the East China Sea, enabling continuous real-time data acquisition. Preliminary test results will be reported at the conference.
Speakers: Ms Jia Dong (Shandong University), Ms Xiaohan Sun (Shandong University), Mr Xian Li (Shandong University), Mr Changyu Li (Shandong University), Ms Yuying Li (Shandong University), Chengguang Zhu (Shandong University (CN)), Dr Kang Jia (Shandong University), Kun Hu (Shandong University (CN)) -
12:05
Regional reconstruction of tracks for the ATLAS Event Filter using GPU accelerators 20m
The upcoming high-luminosity phase of the LHC (HL-LHC) presents several challenges for the ATLAS experiment's Trigger and Data Acquisition system, necessitating a full upgrade of the system. A key challenge for the Event Filter, where high-level event reconstruction and final event selection will run at 1 MHz, lies in the computational demand for online track reconstruction within the Inner Tracker in selected regions of interest at the full trigger rate and of the full tracker acceptance at 150 kHz. Over the past few years, extensive research has been conducted into utilising hardware accelerators in the ATLAS Event Filter system to improve tracking throughput and reduce full-system power consumption. Various end-to-end track reconstruction pipelines have been developed using GPUs and FPGAs. These pipelines demonstrate their capabilities by offloading different amounts of the computing load to the accelerators.
This contribution focuses on developments and optimizations for GPU-based track reconstruction in regions of interest. The scaling of throughput and latency with the size and occupancy of regions of interest has been studied for GPU-based tracking pipelines originally designed for efficiently reconstructing full tracker acceptance simultaneously. Different approaches for improving the utilization of the GPU resources for the smaller regions are presented and compared to the full acceptance algorithms as well as the CPU counterparts.
Speaker: Benjamin Michael Wynne (The University of Edinburgh (GB)) -
12:05
Streaming Readout DAQ Systems for High-Luminosity Electron-Beam Experiments 20m
Streaming readout (SRO) data acquisition is emerging as a key paradigm for high-luminosity nuclear and particle physics experiments, enabling triggerless operation with continuous, time-stamped data streams processed in software. By reorganizing detector hits into time slices and performing real-time reconstruction and selection using global detector information, SRO allows efficient data reduction and the integration of advanced AI-based algorithms on heterogeneous computing platforms. This approach has been adopted by several future experiments, including SOLID at Jefferson Lab and ePIC at the Electron–Ion Collider.
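The time-slice reorganization described above can be sketched as follows (the slice width and hit-tuple layout are assumptions for illustration):

```python
from collections import defaultdict

SLICE_NS = 65536  # assumed fixed time-slice width in nanoseconds

def build_time_slices(hits):
    """Reorganize time-stamped (timestamp, channel, adc) detector hits into the
    time slices on which software reconstruction and selection then operate."""
    slices = defaultdict(list)
    for ts, channel, adc in hits:
        slices[ts // SLICE_NS].append((ts, channel, adc))
    return dict(slices)

hits = [(100, 3, 512), (70000, 7, 300), (65540, 3, 280), (131073, 1, 90)]
slices = build_time_slices(hits)
print(sorted(slices))  # three occupied slices: [0, 1, 2]
```

Grouping hits by slice index rather than by hardware trigger is what lets global, software-level selection see the whole detector for each time window.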
At Jefferson Lab, SRO architectures based on distributed microservices are being developed and validated through dedicated testbed activities under realistic operating conditions. These testbeds are used to assess system scalability, timing synchronization, network throughput, and real-time data reduction strategies, including both conventional compression and machine-learning-based methods. In particular, the STREAM-AI framework provides a flexible platform for prototyping and benchmarking smart online algorithms, such as autoencoder-based compression and AI-assisted filtering, across the full DAQ chain. These efforts inform the design of robust, scalable, and AI-enabled DAQ systems for next-generation high-rate experiments such as ePIC.
Speaker: Fabio Rossi (INFN) -
12:05
The architecture of the Level-0 Global Event Processor at ATLAS 20m
The High-Luminosity upgrade of the Large Hadron Collider (HL-LHC) at CERN will increase the instantaneous proton-proton collision rate by a factor of 3-4 relative to the current LHC, delivering an integrated luminosity of approximately 3000-4000 fb-1 over its operational lifetime. This substantial increase in collision rate, pile-up, and event complexity requires a complete redesign of the ATLAS trigger and data acquisition (TDAQ) system.
A key component of this redesign is the Global Event Processor (GEP), which performs low-latency event selection, filtering, and routing in real time. Following the upgrade, the ATLAS detector is expected to generate data at rates approaching 128 Tb/s, necessitating aggressive real-time data reduction to meet storage and bandwidth constraints.
The upgraded TDAQ architecture employs a distributed processing model in which collision events are delivered every 25 ns to a pool of FPGA-based GEP units in a round-robin fashion. Detector data is compressed to approximately 60 Tb/s for optical transmission and decompressed upon arrival at the GEP. Data arrival latencies vary from 3.18 µs to 6.26 µs after a collision due to heterogeneous detector technologies. Within each GEP, event processing is performed by a directed acyclic graph (DAG) of algorithm processing units (APUs) operating under a strict latency constraint of 7.66 µs after the collision event.
This paper presents a novel modular architecture enabling asynchronous data arrival, buffering, synchronization, and high-throughput processing across networks of streaming APUs, suitable for implementation in single- or multi-die ASICs or FPGAs.
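The latency bookkeeping for such a DAG of APUs can be illustrated with a longest-path check. Only the 7.66 µs budget and 6.26 µs worst-case arrival come from the abstract; the APU graph and processing times below are hypothetical:

```python
from functools import lru_cache

# Hypothetical APU graph: node -> (processing time in ns, upstream nodes)
APUS = {
    "decompress":   (400, []),
    "calo_cluster": (600, ["decompress"]),
    "track_match":  (700, ["decompress"]),
    "global_sum":   (300, ["calo_cluster", "track_match"]),
}
ARRIVAL_NS = 6260  # worst-case data arrival after the collision (6.26 µs)
BUDGET_NS = 7660   # GEP latency constraint (7.66 µs)

@lru_cache(maxsize=None)
def finish(node):
    """Completion time of an APU: it may only start once all inputs are ready."""
    proc, deps = APUS[node]
    return proc + max((finish(d) for d in deps), default=0)

critical_ns = ARRIVAL_NS + max(finish(n) for n in APUS)
print(critical_ns, critical_ns <= BUDGET_NS)  # 7660 True
```

The critical path through the DAG, added to the worst-case arrival latency, is what must stay under the budget; memoizing `finish` makes the check linear in the number of edges.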
Speaker: Mayan Tamari (Brookhaven National Laboratory (US)) -
12:05
Timing and Slow Control backend for CMS Drift Tube On Board electronics 20m
Commissioning of the detectors for the High‑Luminosity Large Hadron Collider (HL‑LHC), also referred to as the Phase‑2 upgrade, is planned for the period 2026–2028 at CERN. In this framework, the readout and control electronics of the Drift Tube (DT) subdetector of the CMS experiment have undergone a complete redesign. This upgrade is being carried out to cope with the expected increase in event rates and to ensure compatibility with the upcoming Trigger and Timing Control Distribution System (TCDS2). In parallel with this, a new back‑end system for timing distribution and slow control has been developed. This system is based on a set of FPGA‑based Advanced Telecommunications Computing Architecture (ATCA) boards onto which custom firmware has been deployed. Each board is connected to the server through a network interface and provides approximately one hundred high‑speed optical links implementing the Low Power Gigabit Transceiver (LpGBT) protocol, used to distribute machine‑synchronous timing signals and slow‑control commands to the detector electronics. The VHDL firmware integrates intellectual property (IP) cores, makes use of the SLAC Ultimate RTL Framework (SURF), and incorporates dedicated modules specifically designed to interface and manage custom on‑detector electronics. Moreover, a Python software library has been developed to configure, monitor, and control the boards through a dedicated Reliable User Datagram Protocol (RUDP) connection between the boards and the server.
This contribution will describe the overall design of the system, detail the implementation at the firmware level, and present the results of the benchtop and in‑field tests performed to date.Speaker: Antonio Bergnoli (Universita e INFN, Padova (IT))
-
11:25
→
12:30
Poster session: Real Time Diagnostics, Digital Twin, Control, Monitoring, Safety and Security
-
11:25
Development of a Portable Real-Time Radiation Measurement Technology for Digital Twin Based Plant Situation Monitoring 20m
Digital twin technology is considered an ideal solution for optimizing nuclear power plant operation and enabling situation-responsive monitoring under various scenarios. Korea Hydro & Nuclear Power has deployed a web-based digital twin of the APR1400 nuclear power plant as part of its plant digitalization strategy. To enhance the applicability of digital twin systems, the use of portable radiation measurement devices is required in addition to conventional fixed radiation monitoring instruments. Such portable systems support situation-responsive monitoring and enable the validation and updating of three-dimensional plant models. In this study, a digital twin-based monitoring framework was demonstrated by combining LiDAR-based three-dimensional plant model comparison with portable radiation measurements. Due to security constraints at nuclear power plants, the proposed approach was investigated at an accelerator-based monoenergetic neutron facility at the Korea Research Institute of Standards and Science. Radiation measurements were performed using a newly developed phoswich-type detector composed of a CLYC scintillator and a plastic scintillator, designed for efficient portable measurements. The results demonstrate the feasibility of applying portable radiation detection systems to digital twin-based monitoring for enhanced situational awareness in nuclear facilities.
Acknowledgment
This work was supported by the Korea Research Institute of Standards and Science (KRISS) (Grant No. GP2025-0008-01), the National Research Council of Science & Technology (NST) funded by the Ministry of Science and ICT (MSIT) (No. GTL25051-000), and the National Research Foundation of Korea (NRF) funded by MSIT (No. RS-2025-02315930).
Speaker: HyeoungWoo Park (Central Research Institute, Korea Hydro & Nuclear Power Co., Ltd.) -
11:45
Enhancements of the central Safety System for Wendelstein 7-X operational phase OP2.4 20m
Ensuring the safety of personnel, the environment, and equipment during operation of the Wendelstein 7-X (W7-X) superconducting fusion experiment is a fundamental legal requirement. A central objective of the W7-X development is therefore to reduce the inherent hazard potential through technical and organizational measures as well as personal safety measures to a level at which the remaining residual risk is acceptable.
The safety control systems of W7-X are designed in accordance with IEC 61511 (functional safety for the process industry).
Since the start of W7-X operation in December 2015, the central Safety System (cSS) has been continuously adapted to meet increasing safety requirements, mainly driven by the integration of new technical systems and diagnostics. During previous conversion phases, however, the cSS was shut down.
To reduce or largely eliminate extensive organizational measures, the cSS is being modified for operational phase OP2.4 to enable safety-relevant operation also during conversion and assembly phases by implementing appropriate safety functions. This paper presents the architecture and core functions of the cSS, followed by a detailed description of the modifications introduced to support safety operation during assembly phases. Newly implemented safety functions, special operating modes, and modifications to the interface cabinets of the cSS for OP2.4 are described. Finally, first results from the commissioning of the enhanced cSS are discussed.
Speaker: Jörg Schacht -
12:05
A Low-Dead Time FPGA-TDC for Optical Beam-Loss Monitor System 20m
At the Anhui University infrared free-electron laser (FEL) facility, an optical beam-loss monitor (oBLM) diagnostic system is under development for spatially continuous beam-loss monitoring along the accelerator beamline. To meet the stringent position-resolution requirements of compact accelerators and to mitigate event ambiguity under multi-bunch operation, an optical fibre is installed parallel to the beamline and coupled to photomultiplier tubes (PMTs) at both ends. Cherenkov light generated by beam losses propagates along the fibre to the two PMTs. After conditioning with a transimpedance amplifier and a high-speed comparator, the signals are digitized by an FPGA-based time-to-digital converter (FPGA-TDC) to obtain precise timestamps. The differential arrival times measured at both ends enable accurate reconstruction of the bunch ID and the longitudinal beam-loss position. To satisfy the oBLM requirements on timing precision and short inter-event intervals, we propose a low-dead-time FPGA-TDC architecture. First, by optimizing the clock distribution path, we implement a four-edge, dual-sampling TDC within a single clock domain. Second, we develop an encoding scheme based on sub-event aggregation, which reduces the system-level dead time of the WaveUnion-TDC to one system clock cycle. The proposed design has been implemented on a Xilinx Zynq UltraScale+ ZU3EG development board, achieving a timing precision of approximately 3.34 ps and a dead time of 1.333 ns.
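The position reconstruction from differential arrival times can be sketched as follows. This is a hedged illustration, not the authors' code: the fiber group index and fiber length are assumed values, and `loss_position` is a hypothetical helper name.

```python
# Sketch of longitudinal loss-position reconstruction from the
# Cherenkov-light arrival-time difference at the two fiber ends.
C = 299_792_458.0      # speed of light in vacuum, m/s
N_EFF = 1.5            # assumed effective group index of the fiber
V_FIBER = C / N_EFF    # propagation speed of light in the fiber, m/s

def loss_position(t_up, t_down, fiber_length):
    """Loss position (m from the upstream fiber end) from the two PMT
    timestamps (s). Light from a loss at z travels z/v to the upstream
    end and (L - z)/v downstream, so t_down - t_up = (L - 2z)/v."""
    return 0.5 * (fiber_length - V_FIBER * (t_down - t_up))
```

Equal arrival times place the loss at the fiber midpoint; the TDC timing precision (~3.34 ps here) translates directly into position resolution via the fiber propagation speed.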
Speaker: Dr Yuchen Yang (Anhui University) -
12:05
Data flow management and service of Lingshu plasma control system 20m
To achieve the complex control of high-temperature, nonlinear fusion plasmas, the overall control task is decoupled into multiple parallel sub-tasks. To host these parallel tasks, the modular Lingshu plasma control architecture is designed as shown in Figure 1. At its core, a unified data flow management system is implemented within this architecture to coordinate all parallel components, thereby enabling integrated control over the plasma discharge.
This data flow management framework centers on a Data Engine, which uniformly manages intra-component data through a key-value mapping structure, as shown in Figure 2. It integrates a Shared Memory Manager (ShmMgr) to provide efficient, zero-overhead data exchange between components. Building upon this architecture, the system features two critical data services: a Just-In-Time Configuration Distribution Service, which dynamically resolves and delivers control parameters within each sub-millisecond control cycle (its operating principle is shown in Figure 3), and a Real-time Data Archiving Service designed for long-pulse operations, which ensures persistent storage without interfering with deterministic real-time performance (its workflow is illustrated in Figure 4).
The proposed framework has been validated through extensive experimental campaigns on the EAST tokamak. Configuration resolution and data archiving introduce minimal overhead (<20 µs and <12 µs, respectively, as detailed in Figure 5 and Figure 6), meeting strict real-time requirements. A simulated 25-hour discharge, illustrated in Figure 7, further confirmed its steady-state operational capability.
Speaker: jq zhu -
12:05
Data Orchestration and Large Model-Driven Interactive Diagnosis for LHAASO Operational Data 20m
The Large High-Altitude Air Shower Observatory (LHAASO) generates massive multi-dimensional operational data in the course of continuous operation, posing prominent challenges to efficient data management and rapid analytical decision-making. To address these challenges, this paper proposes a comprehensive framework integrating data orchestration, intelligent processing, and interactive diagnosis. It classifies LHAASO data by defining the core concepts of "event" and "parameter," realizing the structured organization of heterogeneous data. A hybrid architecture combining persistent storage and stream processing is designed to ensure the reliable integration, high-capacity storage, and low-latency retrieval of TB-level data, with a minimum response time reaching the millisecond level. Based on this, a large model-driven intelligent analysis module is developed: users input natural language requirements via a web interface, and the system can automatically generate data processing code, execute tasks, and visualize results, effectively lowering the technical threshold for non-experts. Experiments demonstrate that the framework can process data efficiently, the large model can enhance analytical convenience, and the system can complete anomaly detection and rapid positioning within seconds, providing an intelligent solution for LHAASO and a reference for the management, analysis, and intelligent development of massive data in large scientific facilities.
Speaker: Huang Li -
12:05
Deep Fusion Attention Transfer Learning Method for Rotating Machinery Fault Diagnosis Based on Two Stage Neural Network 20m
Fusion reactors rely on multiple auxiliary equipment clusters, such as vacuum, cryogenic, and water-cooling systems, working together; effective monitoring and fault diagnosis of these devices is key to the stable operation of a fusion experiment. Under different working conditions, the fault characteristics of rotating machinery differ markedly, which causes models trained under a single working condition to fail. Since real scenarios often provide data sets from multiple working conditions, making full use of these multi-source data sets to ensure model generalization becomes the key to fault diagnosis. To address the limited depth, weak feature-extraction ability, and poor generalization of current multi-source models, an improved deep transfer learning algorithm, the Two-Stage Deep Fusion Attention Transfer Network (TS-DFATN), is studied and proposed. First, the algorithm preprocesses the data sets of the different working conditions and obtains image-based feature representations through time-frequency analysis. A two-stage neural network then extracts features: a primary network extracts common features across the data sets, followed by multiple sub-networks that extract the distinctive features of each data set. To extract these differential features effectively, a dual attention network is added to each sub-network to enhance local feature extraction. The method was tested on a public bearing dataset, and the results show that it improves the accuracy and generalization ability of the model.
Speaker: Shaoqing Liu (Institute of Energy Hefei Comprehensive National Science Center) -
12:05
Design and Implementation of a Distributed Detector Control System for the JUNO Experiment 20m
The Jiangmen Underground Neutrino Observatory (JUNO) is a multi-purpose neutrino experiment featuring the world's largest liquid scintillator detector. With a 20 kton target mass instrumented by over 17k 20-inch and 25.6k 3-inch photomultiplier tubes, this sophisticated apparatus enables unprecedented precision in neutrino measurements. To ensure stable operation of this complex system, a robust and scalable distributed Detector Control System (DCS) has been implemented. The core acquisition software for subsystem IOCs (ASIOCs), developed using EPICS and Java, employs distributed algorithms for data partitioning and parallel collection. This architecture efficiently handles the enormous data flow from approximately 40 million channels, storing it in a centralized database. Deployed on the Cloud Fusion Platform (CFP), the system leverages the platform's large-scale workload deployment and distributed network storage to address the scaling challenge. After 10 months of stable operation, the system has demonstrated the reliability, robustness, and scalability required for JUNO's long-term data-taking campaign. This paper details the architecture design and functional implementation of the distributed control system, presents the test results on CFP, and discusses potential AI-based upgrades for future enhancements.
Speaker: Mei YE (IHEP) -
12:05
Development of algorithms for the end-to-end system simulation, mode analysis and performance optimization for a generic and multi-tokamak high-frequency magnetic diagnostic system 20m
An innovative end-to-end system modelling and analysis tool has been developed and applied to test, on simulated and real data, the measurement capabilities of a generic high-frequency (HF) magnetic diagnostic system. This software package can run in real time (RT) and post-pulse, and it is applicable to both discrete and extended inductive magnetic sensors. The package runs in a multi-tokamak environment through interfaces with the EUROfusion IMAS and DEFUSE frameworks, and it is applicable to both existing (TCV, JET) and foreseen (ITER, DTT, DEMO) tokamaks.
The goals of this RT-compatible package are: (1) generate synthetic data to test the HF magnetic diagnostics and general analysis algorithms; (2) obtain estimates of the intrinsic measurement uncertainties and assess the actual vs. intended system measurement performance for correctly detecting individual components in the frequency spectrum of HF magnetic instabilities; (3) analyse HF coherent modes to provide their most important and RT-relevant observables: frequency, amplitude and mode numbers. This is essential for RT applications where, as an example, we wish to detect the onset of a magnetic island and then determine whether, and which, corrective actions must be taken to stabilize the discharge.
This tool has been applied on actual and simulated data extracted from a large database of magnetic instabilities observed in TCV and JET.
The algorithm has then been adapted to the existing ITER magnetic diagnostics, testing their measurement capabilities, specifically for RT applications. A test application has also been developed for the magnetic diagnostic system currently foreseen for DEMO and DTT.
Speaker: Dr Duccio Testa (EPFL) -
12:05
Distributed Modeling and Simulation Methods for Tokamak Simulation Systems 20m
Tokamak devices are highly complex with many subsystems, making traditional independent physical models insufficient for the simulation and operation requirements of future fusion reactors. Accurate characterization of the device’s dynamic response requires collaborative modeling by researchers from different subsystems under a unified standard to support complex discharge analysis.
In the context of long-term, multidisciplinary, and multi-team collaborative modeling, tokamak simulation platforms face more stringent requirements. On the one hand, they must support independent development of subsystem models while allowing seamless integration into a unified system-level simulation framework. On the other hand, to accommodate differences in subsystem response frequencies, the platform must support asynchronous simulation and time coordination across multiple frequencies and time scales. Meanwhile, a flexible and scalable mechanism for managing and sharing models and parameters is needed to support large-scale collaborative modeling and continuous model iteration.
This paper researches and develops a distributed collaborative modeling and simulation solution for digital tokamaks. The proposed solution, based on the unified modeling specifications and operating platform of the Plasma Control Simulation Verification Platform (PCSVP), realizes standardized access and interconnection interfaces for subsystem models. It addresses the co-simulation synchronization challenges caused by heterogeneous subsystem response frequencies via a multi-rate asynchronous data management mechanism. Furthermore, it establishes a traceable, reusable, and low-threshold collaborative development and operation system for large-scale tokamak simulation models through asset management of models and parameters and collaborative toolchains, thereby significantly improving the integration and modeling-iteration efficiency of tokamak simulation models.
Speaker: Heru Guo (Institute of Energy, Hefei Comprehensive National Science Center) -
12:05
Electronics for Two-Dimensional Magnetic-Field Measurement in a Polarized-Light Helium Optically Pumped Magnetometer 20m
Helium optically pumped magnetometers (He-OPMs) offer high sensitivity, wide bandwidth, and room-temperature operation, making them suitable for precision magnetic-field measurements. When operated with linearly polarized optical pumping, the light propagation direction and polarization axis provide two intrinsic reference vectors, allowing orientation-related information beyond scalar magnetometry. By applying an oscillating magnetic field at the Larmor frequency, such information can be extracted from the harmonic components of the transmitted optical signal, whose amplitudes and phases reflect the magnetic field relative to the optical reference frame. In this work, dedicated FPGA-based digital signal processing electronics are designed for a linearly pumped He-OPM. The architecture enables real-time detection of the DC component as well as the fundamental and second harmonics, avoiding complex analog demodulation. Experimental results obtained under two-dimensional magnetic-field configurations are presented and show good agreement with the proposed model and analysis approach.
Speaker: Xuan Wang (University of Science and Technology of China) -
12:05
FPGA-based online monitoring for triggerless readout system in dark matter experiment 20m
Triggerless data acquisition (DAQ) systems are increasingly adopted in dark matter experiments to cope with high detector granularity and continuous data streams while avoiding biases in event selection. In this approach, independent front-end readout electronics operate on each channel in continuous free-running mode, using individual digitization thresholds (self-triggers) rather than global trigger decisions. Event building is implemented in software, where timestamps and temporal coincidences between front-end signals are used to define events. Such architectures require a control and monitoring system capable of tracking, in real time, data transfers, timing signals, status flags, and error conditions without interfering with the normal data flow.
We present a general-purpose FPGA-based monitoring system designed for triggerless DAQ architectures and optimized for the specific requirements of dark matter detectors. The system is implemented in hardware using auxiliary boards hosting FPGA devices that interface with front-end digitizers and data concentrators. These boards process monitoring signals such as buffer occupancy, busy and flow-control flags, timing and synchronization signals, and error conditions during both science data taking and calibration runs, enabling continuous checks of DAQ performance and stability.
The proposed solution provides a flexible and scalable design suitable for next-generation dark matter detectors employing triggerless readout. The main features of the implementation are described together with the test bench and the DAQ environment used for system validation. A specialized version of this system has been installed in the Neutron Veto detector of the XENONnT experiment at the Laboratori Nazionali del Gran Sasso (LNGS).
Speaker: Dr Stefano Mastroianni (INFN Napoli) -
12:05
High‑Voltage System control for the ICARUS experiment at Fermilab 20m
The ICARUS-T600 detector is a large liquid argon time projection chamber (LAr-TPC) installed at Fermilab within the Short-Baseline Neutrino (SBN) program. Its operation depends on a highly stable and precisely controlled high-voltage (HV) system that generates the uniform electric field required for ionization charge drift and collection. The system supplies –75 kV to a central cathode, producing an electric field of 500 V/cm across a 1.5 m drift distance.
The HV infrastructure consists of an external power supply, custom cryogenic feedthroughs, and a resistive voltage divider chain that linearly distributes the potential along the field cage. A fully automated ramp-up and ramp-down capability is implemented through a remote PC-based control system using an Ethernet analog interface (EDAS-1000), allowing accurate configuration of voltage setpoints, ramp rates, and safety thresholds.
Continuous real-time monitoring ensures that voltage and current remain within operational limits, while automatic interlocks and logging guarantee safe operation. The system has demonstrated excellent stability, maintaining fluctuations within ±25 V at –75 kV (about 3 × 10⁻⁴ relative variation) with no measurable leakage current. Controlled ramping and careful electrical design minimize stress and thermal load, ensuring reliable long-term operation under cryogenic conditions.
Since its commissioning in 2020, the HV system has supported cosmic-ray calibration and continuous neutrino data taking, achieving signal-to-noise ratios above 5 and electron lifetimes exceeding 3 ms. In late 2023, the control software was upgraded with an automatic emergency ramp-down triggered by power outages or interlock signals, a feature validated through extensive testing and stable GUI operation.
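The quoted stability and field figures are mutually consistent, as a quick back-of-the-envelope check shows. This arithmetic is ours, not part of the HV control software.

```python
# Consistency check of the ICARUS HV figures quoted above.
v_cathode = 75_000.0              # cathode voltage magnitude, V
drift_cm = 150.0                  # 1.5 m drift distance, in cm

field = v_cathode / drift_cm      # drift field: 500 V/cm
stability = 25.0 / v_cathode      # ±25 V -> ~3.3e-4 relative variation
```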
Speaker: Antonio Gioiosa (University of Molise & INFN Roma Tor Vergata) -
12:05
Low-latency digital control of an all-in-fiber phase noise reduction loop for VIRGO Squeezer 20m
The “all-in-fiber” phase noise cancellation loop demonstrator uses pigtailed mirrors, beam splitters, and photodetectors to sense phase deviations, and an acousto-optic modulator (AOM) to apply the proper frequency shift to the reference beam of the VIRGO detector’s squeezer. This compensates, to a certain extent, for environmental factors affecting the phase of light along its path.
The setup originally employed a bench-top analog RF generator with external frequency modulation capability to drive the AOM. This instrument has recently been replaced with a direct digital synthesizer, improving system integration and reconfigurability, with the substantial advantage of an overall lower phase noise profile of the modulated RF tone. Moreover, the settling time for frequency steering has been reduced by almost an order of magnitude with respect to the original circuit, allowing the noise reduction bandwidth to be extended to 100 kHz. An ADC bridges the analog and digital domains, and dedicated firmware implements the frequency-tuning logic on an FPGA. The loop compensation, originally realized with an analog PID module and tuned with the Ziegler-Nichols method, has been transposed to the digital domain by “emulation”, using Tustin’s bilinear transform to obtain the coefficients of an IIR filter.
The contribution briefly explains the working principle of the optical setup, reports on the ancillary RF electronics, and especially discusses the control strategy and its implementation. Latencies, quantization effects, and other limitations are also discussed, highlighting further improvements to be made.
Speaker: Marco Toffano (Universita e INFN, Padova (IT)) -
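The PID-to-IIR “emulation” step admits a compact closed form: substituting Tustin’s mapping into the ideal PID transfer function yields a biquad whose denominator has roots at z = ±1 (the integrator and differentiator on the unit circle). A minimal sketch, with placeholder gains and sample rate (not the actual loop parameters):

```python
def pid_tustin(kp, ki, kd, fs):
    """Discretize the analog PID H(s) = kp + ki/s + kd*s with Tustin's
    bilinear transform s -> 2*fs*(1 - z^-1)/(1 + z^-1).

    Returns (b, a) coefficients of the equivalent second-order IIR:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    T = 1.0 / fs
    b = [kp + ki * T / 2 + 2 * kd / T,   # b0
         ki * T - 4 * kd / T,            # b1
         -kp + ki * T / 2 + 2 * kd / T]  # b2
    a = [1.0, 0.0, -1.0]                 # denominator 1 - z^-2
    return b, a
```

A pure-proportional controller reduces to a pass-through gain, and for a pure integrator the numerator satisfies b1 = b0 + b2, which are convenient sanity checks on the coefficients.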
12:05
Shape-factor analysis of real-time, in-situ photon-depth spectra for land-quality β⁻-emitter assay 20m
A spectral analysis method is presented to determine the depths at which strontium-90 and its daughter isotope yttrium-90 arise, based on their bremsstrahlung contributions to the 500-1000 keV energy region. The spectra used to demonstrate this approach were recorded in real time, in situ, in blind tubes with a cerium bromide detector comprising a Ø10 mm × 9.5 mm crystal coupled to a full-featured MCA Topaz-SiPM digitizer. Theoretical bremsstrahlung accounting for in-situ attenuation is also presented.
Speaker: Kate Williams (Lancaster University) -
12:05
The ATLAS Forward Proton (AFP) sub-detector and the adverse effects of radiation from Large Hadron Collider (LHC) collimators during pp-collisions at 13.6 TeV 20m
The ATLAS Forward Proton (AFP) sub-detector is located in the forward region of the ATLAS experiment, between ~205 m and 218 m from ATLAS Interaction Point 1 (IP1) and in close proximity to the TCL6 aperture-defining collimator (~219 m) within the Large Hadron Collider (LHC). It has faced significant operational challenges during Run-3 pp-collisions at √s = 13.6 TeV due to elevated and increasing radiation exposure as integrated luminosity grows.
This contribution presents the impact of this environment on AFP detector components, as well as on personnel performing maintenance and component replacement during short accesses. Radiation-induced effects, such as increased leakage current, degradation of detector resolution and efficiency, single-event upsets, and failures in power supplies and front-end electronics, have been studied using monitoring data and simulation. The high dose rates also constrain hands-on intervention times and require careful planning to keep individual and collective doses within regulatory limits.
Possible mitigation strategies will be discussed, including adjustments to collimator settings, optimized shielding for electronics and personnel, relocation or consolidation of equipment, and the use of more radiation-tolerant components for future operation. The experience with AFP underlines the need for a comprehensive radiation impact assessment when designing and locating forward detectors in high-radiation environments.
Speaker: Marko Milovanovic (Justus-Liebig-Universitaet Giessen (DE)) -
12:05
Upgraded Control System for the Electron Cyclotron Heating Expansion System on EAST 20m
In the field of nuclear fusion research, the Electron Cyclotron Resonance Heating (ECRH) system is crucial for plasma heating and current drive. To enhance the heating capability from 2 MW to 3.5 MW and extend the pulse width to 1000 seconds, an upgrade of the ECRH system on the Experimental Advanced Superconducting Tokamak (EAST) has been undertaken, focusing on the addition of the #5 and #6 gyrotrons and the corresponding monitoring and protection control system. This paper details the design and implementation of the upgraded control system, which includes central timing control, interlock protection, overcurrent protection, arc protection, RF protection, data acquisition and power acquisition, and real-time power control. The system architecture leverages proven technologies such as PXI, FPGA, and PLC platforms to ensure reliability and real-time performance. The upgraded control system enhances the safety and flexibility of EAST operations, supporting advanced plasma experiments and contributing to fusion research for ITER and future reactors.
Speaker: Dr Weiye Xu (Institute of Plasma Physics, Chinese Academy of Sciences)
-
11:25
-
12:30
→
15:00
Lunch break 2h 30m
-
15:00
→
16:20
Front-End Electronics, Fast Digitizers, Fast Transfer Links & Networks Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
Conveners: Elena Pedreschi (Universita & INFN Pisa (IT)), Marco Francesconi-
15:00
An OS-based Software Architecture for reliable front-end data readout in the Hyper-Kamiokande Experiment 20m
Improved sensitivity and accuracy of underwater event detection is among the most notable changes projected for the next-generation Cherenkov neutrino detector Hyper-Kamiokande. To achieve this goal, the front-end electronics have been redesigned to perform the analog-to-digital conversion next to the PMTs, in an underwater vessel. This overhaul creates the need for a management system that can report the status of the different elements of the front-end electronics and remotely configure data-taking parameters, as the vessel will not be physically accessible once the tank is closed. This work describes the software stack built on top of the Data Processing Board (DPB) that serves this purpose. To develop this software, strategies commonly followed in reliable applications for the automotive and datacenter industries were researched and applied to the DPB. The outcome is a Linux-based software stack running on a heterogeneous hardware architecture that self-monitors its most crucial processes, reports the status of the electronics, and takes data from the PMTs through high-speed multigigabit links. In addition, given the increased complexity of the software stack with respect to other embedded solutions, a customized boot-up procedure with fallback mechanisms has been developed to ensure that the software remains operational even in the presence of errors due to malfunction over time. This software is running on pre-series units of the DPB and has been tested, delivering the desired throughput and fulfilling the required front-end specifications.
Speaker: Alejandro Gómez Gambín (Universitat Politècnica de València) -
15:20
Cost-effective and competitive alternative to the ITER Time Communication Network for accurate time synchronization 20m
In large-scale fusion experiments, such as those addressing the major energy challenges of the current century, accurate control and data acquisition systems are crucial for facility operation and the validation of fundamental theories. Diagnostic systems require integration strategies to manage the high data volume, ensure high sampling rates and precisely synchronize control actions. Since control equipment is geographically distributed, temporally accurate triggering is essential to coordinate data acquisitions across the experiment.
This coordination challenge is being addressed at the ITER Neutral Beam Test Facility (NBTF), which hosts the MITICA experiment. Since MITICA is developing ITER's full-size injector, its control system strictly adheres to ITER CODAC directives. Synchronization is achieved via the Time Communication Network (TCN), based on the IEEE 1588 v2 protocol. Using National Instruments PXI-6683H devices and proprietary APIs, a full TCN network was successfully implemented at the NBTF, achieving the required synchronization accuracy with an RMS of less than 50 ns.
However, the cost of scaling this solution is substantial. The need for a dedicated server, a National Instruments chassis, and a PXI-6683H module for each TCN node prompted the exploration of more economical and efficient alternatives.
To this end, we decided to develop a complete and cost-effective solution based on FPGA technology. Leveraging the knowledge gained with the MITICA TCN and its PTP protocol, we propose a device utilizing a Kria KR260 board and the PetaLinux framework. This board can generate synchronized clock and trigger signals via PTP and is designed for simple and cost-effective integration into the existing synchronization infrastructure.
Speaker: Luca Trevisan (Consorzio RFX) -
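The core of the IEEE 1588 synchronization mentioned above is the offset/delay computation from one Sync/Delay_Req exchange. The following is the textbook formula, not NBTF code; it assumes a symmetric network path, which is the standard PTP assumption.

```python
def ptp_offset_delay(t1, t2, t3, t4):
    """IEEE 1588 servo input from one Sync/Delay_Req exchange:
    t1 = Sync sent (master clock),  t2 = Sync received (slave clock),
    t3 = Delay_Req sent (slave),    t4 = Delay_Req received (master).
    Assumes a symmetric path; returns (offset, one-way delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # mean path delay
    return offset, delay
```

Asymmetry between the two path directions appears directly as an offset error, which is why hardware timestamping close to the PHY (as in the PXI-6683H or an FPGA implementation) is needed to reach tens-of-nanoseconds RMS accuracy.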
15:40
TRISK-J: a novel integrated front-end with edge computing capabilities for NEXT Experiment 20m
A new CMOS integrated front-end has been developed for use in Time Projection Chamber (TPC) based experiments. The readout of large-scale detectors involves some intrinsic challenges. Nowadays, most of these experiments choose Silicon Photomultipliers (SiPMs) as sensors, but these come in a large variety of sizes and gains, which translates into different parasitic capacitances and dark-count rates. Moreover, large detection areas with high coverage are needed, resulting in a high dynamic range of photoelectrons (PE). Additionally, short integration windows are required to obtain better topological information; however, this constraint reduces the charge integrated by each SiPM, thus decreasing the signal-to-dark-count ratio. Beyond these issues, the system must be able to process signals of arbitrary shape and ensure single-photon sensitivity for calibration purposes. Finally, the large detector scale implies a high number of channels, which generates a massive amount of data to be managed.
To overcome these problems, TRISK-J implements a two-path channel architecture that fits the NEXT experiment's requirements. The ASIC integrates a configurable preamplifier to handle the sensor capacitances. This module feeds a slow path for charge integration and a fast path for data-rate optimization. The slow path implements a wide-range integrator followed by an asynchronous analog-to-digital converter (ADC). The fast path includes a fully analog statistical estimator that helps discard windows without true signal content. Finally, a high-speed digital interface sends data to the external acquisition system.
Speaker: Vicente Herrero-Bosch (Universidad Politécnica de Valencia) -
16:00
A High-Bandwidth, High-Framerate Integrated Ionizing Particle Detection System 20m
The Advanced Accelerator Diagnostics (AAD) Collaboration has developed a high-bandwidth beam diagnostic prototype designed for multi-GHz intensity and centroid position measurements of ionizing particle beams. The system integrates a thinned diamond sensor with a volume of 1.9 × 1.9 × 0.031 mm³ with a compact, low-inductance signal path coupled to a custom readout ASIC, the FastPulse Precision Sampler (FPS). This ASIC features a high-bandwidth front-end and a 45-element switched-capacitor array capable of sampling at up to 37 GS/s. Experimental validation at the SLAC Next Linear Collider Test Accelerator (NLCTA) facility demonstrated a detection bandwidth of 4-5 GHz, with measured rise times of 70 ps, jitter of 50 ps, and an instrument response function (FWHM) under 125 ps. The system achieved a readout frame rate exceeding 130 kHz and an effective number of bits (ENOB) greater than 10, supported by a low sampling noise of 140 µV rms. Based on these results, the collaboration is scaling to a four-channel position-sensitive system for 2026 testing. Future iterations aim for sub-picosecond timing resolution through enhanced ASIC features, including on-chip digitization and self-triggering capabilities to simplify deployment in detector and accelerator systems.
Speaker: Carl Grace
-
15:00
-
16:20
→
16:50
Coffee break 30m
-
16:50
→
18:30
Data Acquisition and Trigger Architectures Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
Convener: Stefan Ritt (Paul Scherrer Institut (Switzerland))-
16:50
Optimising demanding I/O applications: the HL-LHC ATLAS Readout case study 20m
The ATLAS detector at the Large Hadron Collider processes 40 million proton-proton collision events per second. The Readout system is a central component of the ATLAS data-acquisition infrastructure, interfacing with the detector electronics. For the HL-LHC, starting in 2030, the Readout system will receive data from more than 15,000 optical links, each operating at 1 MHz, corresponding to an aggregated throughput of 5 TB/s. The system will ingest and de-serialise data using custom FPGA-based PCIe cards (FELIX); process and aggregate data with the Data Handler software; and distribute data via a 400 Gb/s network. In addition, the Readout infrastructure will deliver timing and trigger information, configuration and control commands, and monitoring data.
In the original design, FELIX and the Data Handler were hosted on separate server pools, interconnected by a dedicated network and requiring about 1,000 computing units. Following a requirements review, lessons learned during the ongoing data-taking period, and recent advances in computing hardware, the design was updated in 2025 to integrate both components onto a single server platform. The new architecture provides a more reliable, serviceable, and compact system, requiring only about 300 servers with minimal feature loss.
This talk presents the ATLAS HL-LHC Readout architecture, highlighting requirements and challenges. Operational and technological aspects enabling the design update will be discussed, together with the performance of the updated system. Lessons learned in executing demanding I/O workloads on modern computer systems will also be presented, including software optimisations, application layout, and operating system configuration.
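The scale of the I/O problem follows directly from the figures quoted above. The arithmetic below is our own back-of-the-envelope check, not ATLAS software:

```python
# Per-link, per-event, and per-server rates implied by the quoted numbers.
links = 15_000
event_rate_hz = 1e6          # 1 MHz hardware-trigger accept rate
aggregate_bps = 5e12         # 5 TB/s total Readout input
servers_new = 300            # integrated FELIX + Data Handler servers

per_link_bps = aggregate_bps / links             # ~333 MB/s per link
per_event_bytes = aggregate_bps / event_rate_hz  # ~5 MB per event
per_server_bps = aggregate_bps / servers_new     # ~16.7 GB/s per server
```

The ~16.7 GB/s of sustained ingest per server is what makes the software optimisations, application layout, and OS configuration discussed in the talk essential.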
Speaker: Carlo Alberto Gottardo (CERN) -
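The quoted figures can be cross-checked with a few lines of arithmetic; this is an editorial sketch, and the per-link rate and the network-limited server count below are derived from the abstract's numbers, not stated in it:

```python
# Back-of-envelope check of the quoted HL-LHC Readout figures
# (5 TB/s, 15,000 links, 400 Gb/s per-server network).
TOTAL_THROUGHPUT_B = 5e12      # 5 TB/s aggregated input
N_LINKS = 15_000               # optical links into FELIX
NETWORK_GBPS = 400             # per-server network bandwidth

per_link_Bps = TOTAL_THROUGHPUT_B / N_LINKS       # bytes/s per link
per_link_gbps = per_link_Bps * 8 / 1e9            # gigabits/s per link
print(f"per link: {per_link_gbps:.2f} Gb/s")      # ~2.67 Gb/s

# Lower bound on server count from network bandwidth alone; the quoted
# ~300 servers suggests other constraints (CPU, PCIe slots) dominate.
min_servers = TOTAL_THROUGHPUT_B * 8 / (NETWORK_GBPS * 1e9)
print(f"network-limited server count: {min_servers:.0f}")  # 100
```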
17:10
Wireless TDAQ Upgrades for the PEPS Surface Detector Array 20m
The Probing Extreme PeVatron Sources (PEPS) project aims to measure galactic gamma rays above 1 PeV. It will deploy a water-Cherenkov detector array over 2 km² (Phase 1), to be extended to 10 km² (Phase 2), co-located with the Pierre Auger Observatory. Each station will reuse decommissioned Auger electronics. The legacy communication system—custom 915 MHz “Leeds Radio” for access and a microwave backbone for backhaul—faces spectral congestion and component obsolescence. A key constraint for modernization is to preserve the existing hierarchical topology and timing requirements of the detector electronics.
We compare two 2.4 GHz ISM-band options: a turnkey Ubiquiti airMAX solution with high throughput but prohibitive power draw for solar stations, and a low-power prototype based on SX1280 transceivers driven by ESP32 in Fast Long-Range Communication (FLRC) mode.
In non-line-of-sight indoor tests with strong co-channel Wi-Fi interference, using 2 dBi antennas at both ends, FLRC configurations from 260 kbps to 1 Mbps achieve PER ≈ 1% at RSSI ≈ −80 dBm. We further performed outdoor line-of-sight measurements over distances exceeding 700 m, observing comparable PER performance. Under a 36 dBm EIRP limit, a 2 km free-space link at 2.45 GHz is expected to deliver ≈ −70 dBm at the receiver, corresponding to a >15 dB link margin relative to the measured operating point when using directional receive antennas under LOS conditions.
These preliminary results support the feasibility of kilometer-scale LOS links for burst-mode waveform readout and motivate follow-up outdoor and topology-representative measurements.
Speaker: Yifan Yang (Universite Libre de Bruxelles (BE)) -
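The quoted 2 km free-space estimate can be reproduced with the standard Friis path-loss formula; this is a sketch in which the receive antenna is taken as isotropic, so any directional RX gain adds on top:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (Friis, isotropic antennas)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

EIRP_DBM = 36.0              # regulatory EIRP limit from the abstract
prx_dbm = EIRP_DBM - fspl_db(2_000.0, 2.45e9)
print(f"{prx_dbm:.1f} dBm")  # -70.3, matching the quoted ~ -70 dBm

# Margin over the measured operating point (PER ~ 1% at about -80 dBm):
print(f"{prx_dbm - (-80.0):.1f} dB")  # ~10 dB before directional RX gain
```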
17:30
Development of a New Data Acquisition System for Subthreshold Pion Production Experiment 20m
We developed a new data acquisition (DAQ) system for subthreshold pion production experiments (SUPER). This system reads self-triggered digitizer signals from 768 CsI(Tl) crystals, together with a beam-monitor signal, in streaming mode. SUPER needs both streams to constantly monitor the beam flux and trigger rates. We successfully tested a SUPER DAQ prototype during beam commissioning using a 3 × 3 CsI detector. The amplified pulse shape from the CsI(Tl) detector was recorded using a 62.5 MHz digitizer (V1740D), and the beam trigger signal was sent to a triggerless DAQ board (AMANEQ). By analyzing the timestamp information, we confirmed that the trigger rates of both systems were consistent within 0.5%. We plan to use this new DAQ system for the first experiment (RCNP E610) of the SUPER campaign, which will handle trigger rates of up to 50 kHz. The campaign will continue at other heavy-ion facilities such as RIBF and RAON.
Speaker: Sun-Young Ryu -
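The timestamp-based rate cross-check described above can be sketched as follows, with synthetic timestamps standing in for the V1740D and AMANEQ streams:

```python
# Sketch: compare the trigger rates of two DAQ streams from their
# timestamps, as in the V1740D/AMANEQ consistency check (synthetic data).
def rate_hz(timestamps_s):
    """Mean trigger rate from a sorted list of timestamps (seconds)."""
    n = len(timestamps_s)
    span = timestamps_s[-1] - timestamps_s[0]
    return (n - 1) / span

digitizer_ts = [i * 1e-3 for i in range(10_001)]         # 1 kHz over 10 s
triggerless_ts = [i * 1.0002e-3 for i in range(10_001)]  # slight clock offset

r1, r2 = rate_hz(digitizer_ts), rate_hz(triggerless_ts)
rel_diff = abs(r1 - r2) / r1
assert rel_diff < 0.005, "streams disagree by more than 0.5%"
print(f"relative rate difference: {rel_diff:.4%}")
```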
17:50
The Central Trigger Processor board for the Advanced SiPM-based camera of the CTA Large-Sized Telescopes 20m
The Cherenkov Telescope Array Observatory (CTAO) is the next-generation ground-based gamma-ray instrument. CTAO’s Large-Sized Telescopes (LSTs) are specifically designed to observe at the lowest energies accessible to CTAO and to capture fast transient events. To enhance their capabilities, an Advanced SiPM-based Camera is currently under development and requires a robust digital trigger system to cope with the high number of background events observed at the lowest energies. This contribution presents the status of the Central Trigger Processor Board (CTPB), the subsystem responsible for performing both the global camera trigger and the inter-telescope trigger.
We present the final CTPB architecture and the evaluation of several technological approaches through dedicated test benches. Specifically, we explore the potential implementation of custom Machine Learning algorithms on FPGAs, such as Convolutional Neural Networks (CNNs) and clustering techniques, for efficient event discrimination. To achieve high-performance hardware acceleration, we are investigating the deployment of these custom CNN architectures using the AMD Vitis environment. Furthermore, we present preliminary tests on high-speed data transmission, discuss the assessment of different communication protocols, and realistically benchmark the different proposed algorithms. These studies guide the design and fabrication of a future prototype to ensure the CTPB fulfils the LST Advanced Camera's demanding requirements.
Speaker: María Molina Delicado (IPARCOS - Universidad Complutense de Madrid) -
18:10
A Congestion-Aware RDMA Front-End for CTAO-LST Advanced Camera Readout 20m
Next-generation telescopes require front-end electronics able to sustain multi-gigabit data streams while preserving deterministic timing and reliable event delivery. A new evolution of the FADC (Fast Analog-to-Digital Converter) digitizer board, originally developed for the CTAO-LST Advanced Camera, is presented, extending it into a fully self-contained, network-attached front-end module.
The new board integrates two FPGAs: a high-performance data FPGA responsible for trigger, buffering and data acquisition, and a service FPGA dedicated to timing distribution, White Rabbit synchronization and slow control. High-rate waveform data are streamed directly from the front end to DAQ servers using a 10 GbE RoCEv2 RDMA engine written in Bluespec SystemVerilog and released as open-source, enabling zero-copy, low-latency transfers without CPU intervention.
To guarantee lossless and deterministic operation in a funneling network topology, a hardware implementation of DCQCN (Data Center Quantized Congestion Notification) is being developed and tightly coupled to the RoCEv2 engine. DCQCN dynamically regulates the RDMA injection rate by delaying packet emission toward the UDP/MAC layer in response to ECN feedback from the Ethernet fabric.
The algorithm has been optimized using an ns-3 simulation environment that reproduces realistic experiment network conditions, allowing extraction of stable and high-throughput control parameters prior to hardware deployment. The resulting firmware architecture provides an end-to-end congestion-aware RDMA data path directly at the detector front end.
This approach enables scalable, trigger-capable, and network-native readout electronics well suited for large-channel-count experiments and future real-time distributed DAQ systems.
Speaker: Filippo Marini (INFN - National Institute for Nuclear Physics)
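A minimal sketch of the DCQCN sender-side rate update described above (following Zhu et al., SIGCOMM 2015); the parameter values are illustrative, not the tuned ns-3 results from the abstract:

```python
# Simplified DCQCN reaction-point logic: multiplicative decrease on ECN
# feedback (CNP), alpha decay and recovery toward the target rate otherwise.
class DcqcnRate:
    def __init__(self, line_rate_gbps, g=1 / 16, rai_gbps=0.05):
        self.line = line_rate_gbps
        self.rc = line_rate_gbps   # current injection rate
        self.rt = line_rate_gbps   # target rate used for recovery
        self.alpha = 1.0           # congestion estimate (EWMA of marking)
        self.g = g
        self.rai = rai_gbps        # additive-increase step

    def on_cnp(self):
        """ECN feedback arrived: multiplicative decrease."""
        self.rt = self.rc
        self.rc *= 1 - self.alpha / 2
        self.alpha = (1 - self.g) * self.alpha + self.g

    def on_timer(self):
        """No marking this period: decay alpha, recover toward target."""
        self.alpha *= 1 - self.g
        self.rt = min(self.rt + self.rai, self.line)
        self.rc = min((self.rt + self.rc) / 2, self.line)

link = DcqcnRate(line_rate_gbps=10.0)
link.on_cnp()                 # fabric marked packets: halve (alpha = 1)
print(f"{link.rc:.2f} Gb/s")  # 5.00
for _ in range(5):
    link.on_timer()           # congestion cleared: climb back up
print(f"{link.rc:.2f} Gb/s")  # 9.84
```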
-
-
08:30
→
09:10
Invited talk Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
Convener: Prof. Audrey Corbeil Therrien (Université de Sherbrooke)-
08:30
Real time monitoring of Virgo superattenuators 40mSpeaker: Alberto Gennai
09:10
→
09:50
Emerging Technologies, New Standards, Feedback on Experience & Industry Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
Convener: Martin Grossmann (Paul Scherrer Institut)-
09:10
A System-Agnostic Method for Ultra-Fine Clock Phase Shifting 20m
Achieving sub-picosecond clock phase alignment is a challenge in timing distribution systems. A common strategy is to implement a control system that measures the feedback phase and shifts the distributed clock to compensate for variations. At the sub-picosecond level, deterministic phase shifting becomes a major bottleneck, as the cost and complexity of solutions increase significantly with the required precision.
This paper presents a novel technique, called System-Agnostic Method for Biphasic Alignment (SAMBA), that improves the phase-shifting resolution of digitally controlled PLLs by orders of magnitude. It can be implemented using COTS PLLs as well as PLLs integrated in programmable devices such as FPGAs.
SAMBA works by cascading two PLLs and tuning their oscillator frequencies so that the phase-shift resolutions of the two PLLs are slightly different. This small difference can be exploited through the Vernier effect to produce ultra-fine phase steps: by shifting the phase of both PLLs in opposite directions, the net phase shift equals the difference between the two resolutions. With a linear combination of shifts, the resolution can be further improved to the greatest common divisor of the individual resolutions.
Results show that SAMBA can provide phase-shift steps as low as 100 fs, which is the limit of our measurement. The mathematical model, however, predicts that it can be arbitrarily low. Indirect measurements by averaging multiple steps demonstrated that an average resolution as low as 1 fs can be achieved.
Speaker: Mauricio Feo (CBPF - Brazilian Center for Physics Research (BR)) -
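The Vernier arithmetic behind SAMBA can be sketched in a few lines; the step sizes below are illustrative, not the actual hardware resolutions:

```python
from math import gcd

# Vernier phase stepping with two cascaded PLLs (illustrative values).
STEP_A_FS = 5_000   # phase-shift resolution of PLL A, femtoseconds
STEP_B_FS = 4_700   # PLL B tuned to a slightly different resolution

# Shifting the two PLLs one step in opposite directions yields the difference:
net_step_fs = abs(STEP_A_FS - STEP_B_FS)
print(net_step_fs)  # 300 fs

# With linear combinations a*STEP_A - b*STEP_B, the finest reachable step is
# the greatest common divisor of the two resolutions (Bezout's identity):
finest_fs = gcd(STEP_A_FS, STEP_B_FS)
print(finest_fs)    # 100 fs
```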
09:30
Real-Time FPGA-Based SiPM Detector Emulation using Temporally Quantized Model 20m
We present a real-time hardware implementation of a versatile detector emulator capable of reproducing realistic Silicon Photomultiplier (SiPM) signals. Our approach builds upon the open-source SimSiPM framework originally developed to simulate the microscopic response of SiPMs, including photon detection efficiency, optical crosstalk, afterpulsing, and dark counts. SimSiPM provides idealized photon-level data with arbitrary temporal and amplitude resolution. In contrast, our FPGA-based emulator translates this fine-grained simulation into physically realizable analog signals, maintaining real-time operation and finite hardware resolution.
The system receives simulated photon events either via a 10 GbE UDP stream or directly from the Zynq Processing System (PS), and performs on-FPGA temporal quantization, dividing time into bins equal to one clock cycle. All photon hits within a bin are accumulated, and their contribution is combined through a weighted temporal averaging scheme that preserves sub-bin precision. Signal shaping is executed entirely in hardware, using a digital low-pass filter to emulate the finite rise time and a digital RC network for exponential decay synthesis. The resulting waveform is converted to analog through dual 16-bit LTC2000 DACs operating at 2.5 GHz.
This architecture enables the generation of physically accurate detector signals in real time, rather than precomputed waveforms. It also generalizes beyond SiPMs, providing a flexible framework for hardware-in-the-loop testing of front-end electronics. Moreover, when coupled with Geant4 simulations, the system can directly transform particle-level interactions into analog detector outputs. The proposed implementation demonstrates high throughput, low latency, and minimal CPU overhead, achieving real-time emulation of detector behavior on FPGA.
Speaker: Stefano Carsi (Nuclear Instruments, Lambrugo (Co), Italy)
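The temporal-quantization step described above can be sketched as follows, with a hypothetical clock period and photon data; hits are accumulated per clock-cycle bin and an amplitude-weighted mean preserves sub-bin timing:

```python
import numpy as np

CLOCK_NS = 4.0  # one time bin = one clock cycle (e.g. a 250 MHz clock)

def quantize_hits(times_ns, amplitudes):
    """Accumulate photon hits per clock bin with a weighted sub-bin time."""
    bins = np.floor(np.asarray(times_ns) / CLOCK_NS).astype(int)
    out = {}
    for b in np.unique(bins):
        sel = bins == b
        a = np.asarray(amplitudes, dtype=float)[sel]
        t = np.asarray(times_ns, dtype=float)[sel]
        # total charge in the bin + amplitude-weighted mean arrival time
        out[int(b)] = (float(a.sum()), float((a * t).sum() / a.sum()))
    return out

hits = quantize_hits([1.0, 2.0, 9.5], [1.0, 3.0, 2.0])
print(hits)  # {0: (4.0, 1.75), 2: (2.0, 9.5)}
```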
09:50
→
10:20
Coffee break 30m
-
10:20
→
11:05
Mini Orals Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
-
10:20
27 FPGA-Deployed Variational Autoencoder for Real-Time Soft X-Ray Electron Temperature Reconstruction in RFX-mod2 20m
RFX-mod2 is the upgraded version of the RFX-mod Reversed-Field Pinch (RFP) device, operated at Consorzio RFX. Given the substantial phenomenological and engineering complexity of the device, control performance is expected to benefit significantly from the integration of a broader set of diagnostics, such as Soft X-Ray (SXR) measurements, that help characterize the internal plasma state. A crucial parameter for accurately assessing this state is the electron temperature, inferred from SXR radiation. However, to effectively integrate this information in real time, it is necessary to derive a denoised, low-dimensional representation of the acquired signal within a timeframe compatible with the control cycle.
Within this context, autoencoder (AE) architectures and their variational extension (VAE) can be employed to capture the strong non-linearities characteristic of plasma behavior. The proposed VAE model, initially developed in earlier work at the RFX Consortium and subsequently extended, has been evaluated on two datasets: one synthetic and one experimental, the latter derived from the SXR diagnostic of RFX-mod. To achieve real-time deployment on FPGA hardware, the model is subjected to a quantization procedure using either the CERN/HLS4ML framework or the Xilinx/FINN framework. The system demonstrates the ability to recover relevant physical information in real time, suitable for integration into plasma discharge control alongside other diagnostic systems. Results show that the VAE can consistently reconstruct the input temperature profile, even under moderate to high levels of data loss, highlighting the potential of deep generative models for robust data imputation in fusion plasma diagnostics.
Speaker: Luca Orlandi (Università di Padova - Consorzio RFX) -
10:25
119 Benchmarking Neural Network Inference on Versal ACAP AI Engines for Real-Time Detector Alignment 20m
Accurate real-time alignment of detector elements is essential to maintain the integrity of reconstructed particle trajectories, especially in high-rate environments like the ATLAS experiment at the Large Hadron Collider (LHC). Any misalignment in the detector geometry can introduce systematic biases and potentially affect the accuracy of precision physics measurements. Current calibration systems that correct for these effects require substantial computational resources, incur high operational costs, and are often unable to handle rapidly changing conditions.
To overcome these challenges, we propose a calibration system that employs a lightweight neural network to predict the misalignment of the detectors in real time. We present a neural network model with hierarchical subset solvers optimized for Versal ACAP that predicts detector misalignment based on the detector’s current geometry and statistical characteristics of reconstructed particle tracks.
We address this by implementing the system on a heterogeneous architecture, partitioning sequential tasks to CPUs while offloading the computationally intensive matrix multiplications to the AI Engines to exploit their specialized vector-processing capabilities. We benchmark this implementation against FPGA and GPU back-ends to evaluate trade-offs in latency and power across different data types. This work is implemented as a proof of concept for a modern beam telescope, demonstrating the viability of performing inference at the edge at low cost per watt, a crucial requirement for future high-energy physics experiments.
Speaker: Akshay Malige (Brookhaven National Laboratory (US)) -
10:25
218 Single-Detector Cosmic-Ray Muon Identification in Plastic Scintillators Using Machine Learning 20m
Cosmic rays at ground level are dominated by high-energy muons and are traditionally identified using coincidence techniques that require multiple detectors. In this work, a machine-learning-based approach is proposed for single-detector cosmic-ray muon identification using a plastic scintillation detector. Waveform data were acquired from gamma-ray events obtained using standard gamma-emitting sources and from cosmic-ray muons identified through a conventional coincidence setup for labeling purposes. The signals were digitized using a fast waveform digitizer and directly used as inputs to a machine learning model.
Machine learning was employed to discriminate cosmic-ray muons from gamma background based on waveform characteristics. When applied to background radiation measurements, the trained model successfully extracted the cosmic-ray muon component and reproduced the characteristic muon energy deposition peak in the plastic scintillator, consistent with expectations for minimum ionizing particles. The machine-learning-based results show good agreement with those obtained using traditional coincidence techniques. These results demonstrate that machine learning enables reliable cosmic-ray muon identification using a single plastic scintillation detector, offering a simplified and cost-effective alternative to hardware-intensive coincidence systems for radiation monitoring and cosmic-ray studies.
Speaker: Hai Vo -
10:25
219 Self-Supervised and Semi-Supervised Machine Learning for Waveform-Based Neutron/Gamma Discrimination in EJ-276 Plastic Scintillator 20m
Accurate neutron/gamma pulse shape discrimination (PSD) in plastic scintillators is strongly limited at low energies and further complicated by the scarcity and uncertainty of labeled training data, particularly for mixed neutron–gamma sources such as Cf-252. Conventional supervised deep learning approaches rely heavily on clean labels, which are often difficult to obtain in practical experimental conditions.
In this work, we propose a waveform-based neutron/gamma discrimination framework based on self-supervised and semi-supervised learning to reduce label dependence and improve robustness. A self-supervised pretraining stage is first employed to learn latent representations from large volumes of unlabeled scintillation waveforms using contrastive learning. The pretrained model is then fine-tuned with a limited set of labeled neutron and gamma events acquired from Cf-252 and Co-60 sources. Experimental results using an EJ-276 plastic scintillation detector show that the proposed approach significantly improves discrimination performance in the low-energy region below 150 keVee, achieving higher Figures of Merit compared to fully supervised models trained on the same labeled dataset.
Speaker: Hai Vo -
10:25
52 Design-Space Exploration and Integer Quantization of Graph Neural Networks for Real-Time FPGA Track Finding 20m
Real-time track finding for displaced-muon signatures in the CMS Level-1 trigger must operate under strict fixed-latency constraints (12.5 µs) while processing high-throughput detector data. Graph neural networks (GNNs) provide a natural representation of sparse, irregular detector geometries; however, mapping message-passing models to FPGAs requires careful co-optimization of numerical formats, architectural parameters, and high-level synthesis (HLS) microarchitecture.
We present an end-to-end workflow bridging GNN training and FPGA prototyping for a GraphSAGE-based model targeting real-time inference. The pipeline integrates: (i) automated design-space exploration across model dimensions, fixed-point precision, and HLS parameters to expose accuracy/latency/resource trade-offs; (ii) an integer-only INT8 implementation with data-driven bit-width optimization, reducing accumulator and scaling widths while preserving numerical correctness; and (iii) modular C++ kernels synthesized with Vitis HLS and validated through bit-exact C-simulation against Python integer references.
Preliminary validation on the Cora benchmark demonstrates that post-training quantization preserves model accuracy within 0.1% of the floating-point baseline, while enabling substantial reductions in memory footprint and arithmetic complexity. Bit-exact agreement between software and hardware models is achieved using optimized fixed-point scaling. Quantization-aware training and physics-driven datasets for displaced-muon reconstruction are currently under development.
This work establishes a reproducible methodology for deploying message-passing GNNs on FPGAs under strict real-time constraints, providing a concrete path toward fixed-latency GNN-based track reconstruction in the CMS trigger system.
Speaker: Mr Pelayo Leguina (Universidad de Oviedo (ES)) -
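The integer-only quantization scheme can be sketched as follows, assuming symmetric per-tensor scales; the abstract's data-driven bit-width optimization is not reproduced here:

```python
import numpy as np

# Sketch of symmetric per-tensor INT8 post-training quantization for a
# dense layer: integer matmul with INT32 accumulation, one rescale at the end.
def quantize(x, bits=8):
    qmax = 2 ** (bits - 1) - 1          # 127 for INT8
    scale = np.abs(x).max() / qmax
    return np.round(x / scale).astype(np.int32), scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)   # toy weights
a = rng.standard_normal((8,)).astype(np.float32)     # toy activations

qw, sw = quantize(w)
qa, sa = quantize(a)
y_int = qw @ qa                          # integer-only matmul (INT32 accum.)
y = y_int * (sw * sa)                    # dequantize once per output

err = np.abs(y - w @ a).max()
print(f"max abs error vs float: {err:.4f}")
```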
10:25
75 An Initial Study of Neural-Network-Assisted Approaches for Drift Chamber Tracking 20m
Drift chambers are widely used for charged-particle tracking in nuclear and high-energy physics experiments. Track reconstruction commonly involves combinatorial hit association followed by fitting. In experiments, additional noise enlarges the search space, leading to increased and unstable tracking runtime, a problem for (near-) real-time analysis. Motivated by this behavior, neural-network-assisted hit filtering is investigated to suppress noise and limit combinatorial effects in drift chamber tracking. Hits are represented as nodes in an event-wise graph and classified as true or noise using a graph-based neural network prior to track fitting. Using simulated data consistent with experimental hit multiplicities, the proposed filter achieves high classification accuracy and reduces noise by approximately a factor of 100, limiting the number of hit combinations passed to the track fitting stage. These results suggest that neural-network-assisted hit filtering may serve as a useful pre-processing step for controlling tracking runtime. As neural-network inference introduces computational overhead, the overall runtime impact depends on the noise level and the balance between inference cost and noise reduction.
Speaker: Mr Viet Nguyen (RIKEN Nishina Center) -
10:25
99 Real-time Autoregressive Evolution of Tokamak Plasma Magnetic Measurements 20m
High-precision, rapid prediction of the evolution of magnetic measurement signals is essential for tokamak plasma configuration control and the safe operation of discharges. Traditional physics-based models typically suffer from high computational complexity and long execution times, making it difficult to satisfy real-time requirements. Meanwhile, existing data-driven methods often encounter issues with insufficient reliability when performing long-term extrapolation. To address these challenges, this paper proposes a real-time autoregressive magnetic measurement evolution method based on the Transformer architecture. The model takes historical magnetic measurement data and real-time control commands as inputs, leveraging the self-attention mechanism of the Transformer to capture long-range temporal dependencies. At each time step, the model predicts the magnetic measurements for the next step and recursively updates these predictions as subsequent inputs, thereby achieving continuous temporal evolution. Experimental results demonstrate that the proposed single-step autoregressive approach accurately tracks the evolution trajectory of magnetic signals during the discharge process, achieving an accuracy of no less than 92% within a 1000 ms duration. Furthermore, its single-step inference structure is well-suited for low-latency hardware deployment, fulfilling the rigorous demands of real-time control systems. This research provides a high-precision, high-efficiency digital environment for closed-loop control simulators and the development of advanced controllers based on reinforcement learning for magnetic confinement fusion devices.
Speaker: Dr Shuai Li (Institute of Energy, Hefei Comprehensive National Science Center(Anhui Energy Laboratory)) -
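The single-step autoregressive rollout can be sketched as follows; `model` is a hypothetical stand-in (identity plus a control term), not the Transformer predictor from the abstract:

```python
import numpy as np

def model(history, control):
    """Stand-in one-step predictor: next state = last state + 0.1*control."""
    return history[-1] + 0.1 * control

def rollout(initial_history, controls):
    """Feed each prediction back as input for the next step."""
    history = list(initial_history)
    for u in controls:                   # one control command per time step
        history.append(model(np.array(history), u))
    return np.array(history[len(initial_history):])

traj = rollout(initial_history=[np.zeros(3)], controls=[np.ones(3)] * 5)
print(traj[-1])  # each step adds 0.1 -> final [0.5 0.5 0.5]
```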
10:45
100 Temperature-Induced Delay Drift in Xilinx FPGA Multi-gigabit Transceivers 20m
In large-scale physical experiments, the multigigabit transceivers (MGTs) in field-programmable gate arrays (FPGAs) have become a prevalent choice to implement high-precision clock distribution and synchronization. However, the temperature-induced delay drift in MGTs and other electronic components has not been well investigated, posing a significant challenge to synchronization stability at picosecond levels. This paper first characterizes the temperature coefficients of MGTs in a Xilinx Kintex UltraScale+ FPGA and then implements temperature compensation in a clock distribution and synchronization system to demonstrate performance enhancement. Utilizing an on-chip, temperature-robust Digital Dual-Mixer Time Difference (DDMTD) method, the temperature coefficients of the transmitter (TX) and receiver (RX) are measured as 1.42 ps/°C and 0.59 ps/°C, respectively. After applying compensation, within a temperature range from 35°C to 80°C, the maximum drift of the clock distribution is reduced from 90.7 ps to 7.6 ps. The significant temperature effect in the TX is theoretically analyzed at the end of this paper.
Speaker: Yonggang Wang (University of Science and Technology of China) -
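A linear compensation model built from the quoted coefficients can be sketched as follows; the reference temperature and the usage pattern are illustrative assumptions:

```python
# Linear temperature compensation of MGT delay, using the TX and RX
# coefficients quoted in the abstract (1.42 and 0.59 ps/degC).
TX_PS_PER_C = 1.42
RX_PS_PER_C = 0.59
T_REF_C = 35.0   # assumed reference point of the 35-80 degC range

def predicted_drift_ps(temp_c):
    """Uncompensated TX+RX delay drift relative to the reference point."""
    return (TX_PS_PER_C + RX_PS_PER_C) * (temp_c - T_REF_C)

def compensated_delay_ps(measured_delay_ps, temp_c):
    """Subtract the linear model; only the residual drift remains."""
    return measured_delay_ps - predicted_drift_ps(temp_c)

# The linear model alone predicts ~90.5 ps over 35-80 degC, consistent
# with the 90.7 ps uncompensated drift reported in the abstract.
print(f"{predicted_drift_ps(80.0):.1f} ps")  # 90.5
```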
10:45
114 Deep Learning–Based Real-Time Error Detection in Radiotherapy with EPID Images 20m
Accurate in-vivo dose monitoring is increasingly important in radiotherapy to comply with stringent dose delivery guidelines. Electronic Portal Imaging Devices (EPIDs), routinely used for patient positioning, offer the potential for real-time treatment monitoring. However, EPID images are only indirectly related to patient dose, and their use for error detection is challenging. In this study, an in-house developed deep learning (DL) model was used to transform raw EPID images into water-equivalent portal dose images. In particular, we assessed the feasibility of using these DL-generated portal dose images in a real-time error detection alert system in phantom experiments performed at the Careggi University Hospital (Florence, Italy).
Different phantoms were irradiated under reference conditions and with intentionally introduced treatment errors, including deviations in monitoring units (MU) and phantom shifts. EPID images were converted to portal dose images using the DL model, and several comparison metrics were evaluated, including gamma-index analysis and relative mean absolute dose difference (reMADD). The results demonstrate that MU errors can be reliably detected using a combination of metrics. Gamma passing rates decreased with increasing MU errors, while reMADD provided consistent sensitivity across different phantoms and field sizes. Setup errors were more challenging to detect, particularly for narrow fields, with reMADD showing slightly better sensitivity than gamma analysis.
Overall, the proposed DL-based portal dose comparison framework is suitable for use in a real-time alert system for the detection of treatment errors. Future work will focus on extending the approach to clinical patient data.
Speaker: Emmanuel Uwitonze (National Institute for Nuclear Physics, Section of Pisa, Pisa, Italy) -
10:45
144 Realization of Large Model-Driven Tango Control and Data Interaction Analysis Tool 20m
A highly flexible experimental system, integrating custom control/data acquisition hardware and supporting software, is developed to address the dynamic and diverse demands of experimental tasks. Built on Tango control architecture with multi-protocol support (serial, TCP/IP, MQTT), it enables unified management of various devices and rapid construction, debugging, and deployment of experimental setups. The system incorporates an innovative LLM-driven interactive tool that automates code generation/execution, analyzes debugging information and experimental data as per user instructions, and realizes intelligent fault diagnosis and closed-loop repair. Successfully applied in marine observation and preliminary Hunt experiments, it has proven efficient in debugging self-developed sensor modules, real-time data acquisition, and seamless multi-device integration. This practical application fully validates the system’s high efficiency, reliability, and strong adaptability in complex engineering scenarios.
Speaker: Xiaochuan Xie (IHEP) -
10:45
146 A Virtual Gamma-ray Spectrum Database for Training in Seawater Radioactivity Analysis 20m
Since the Fukushima Daiichi Nuclear Power Plant accident, interest in marine radioactivity monitoring has grown significantly, increasing the demand for trained personnel capable of reliable gamma spectrometric analysis. However, education and training in seawater radioactivity measurement remain challenging, as low-level radionuclide analysis typically requires expensive high-purity germanium (HPGe) detectors and radiochemical pre-concentration processes such as ammonium phosphomolybdate (AMP) coprecipitation.
In this study, an education-oriented virtual marine radioactivity spectrum database is developed to support training without reliance on actual measurement infrastructure. Background gamma-ray spectra were derived from experimentally measured seawater samples, while artificial radionuclide spectra were independently calculated using Monte Carlo simulations. These two components were combined to construct a library of virtual training spectra, allowing systematic variation of radionuclide type and activity concentration. This approach enables generation of diverse and realistic training datasets representing a wide range of artificial radionuclide scenarios in marine environments.
To ensure applicability to practical training scenarios, representative seawater sample geometries and detector configurations were defined, and virtual spectra were generated for varying activity levels and counting conditions. The generated spectra were validated through energy calibration and spectral consistency checks against experimental measurements, demonstrating agreement within acceptable limits for educational use.
The resulting database enables trainees to practice spectrum interpretation, peak identification, and basic quantitative analysis in a realistic yet controlled environment. This approach is intended for capacity-building programs in Pacific islands where marine monitoring needs are increasing but radiological infrastructure is limited. The proposed framework provides a scalable and cost-effective solution for marine radioactivity education.
Speaker: Wanook Ji (Korea Atomic Energy Research Institute) -
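The background-plus-simulation composition described above can be sketched as follows; channel numbers, efficiencies, and activities are synthetic, not taken from the database:

```python
import numpy as np

# Sketch: assemble a virtual training spectrum from a measured seawater
# background plus a Monte Carlo response scaled by activity and live time,
# then Poisson-sample to reproduce counting statistics.
rng = np.random.default_rng(1)
N_CH = 1024

background_cps = np.full(N_CH, 0.02)   # measured background, counts/s/channel
mc_response = np.zeros(N_CH)           # MC counts per second per Bq
mc_response[661] = 1e-3                # hypothetical Cs-137 peak channel

def virtual_spectrum(activity_bq, live_time_s):
    expected = (background_cps + activity_bq * mc_response) * live_time_s
    return rng.poisson(expected)       # counting statistics

spec = virtual_spectrum(activity_bq=50.0, live_time_s=3600.0)
print(spec[661], spec[660])  # peak channel well above the ~72-count background
```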
10:45
155 Ion Irradiation facility with Performance Online Monitoring for High Temperature Superconducting Tape 20m
An ultra-compact multi-particle superconducting cyclotron is currently under construction at the China Institute of Atomic Energy (CIAE). It is designed for the irradiation modification of high-temperature superconducting (HTS) tapes and for the production of the medical isotope 211At. An ion irradiation facility with online monitoring of temperature and critical-current performance for HTS tapes has been designed. The facility comprises the following systems: 1) Beam transport system: this includes multiple sets of quadrupole magnets and an octupole magnet for beam shaping and modulation, providing a large irradiation area with high uniformity. These efforts have culminated in a large-area, uniform beam spot (no smaller than 2×2 cm) at the irradiation terminal, with a uniformity better than 90%. 2) Ion irradiation terminal: this terminal features a high-precision roll-to-roll transport system, a high-vacuum system, an efficient cooling system, and an in-house developed online monitoring system for temperature and critical-current performance. It enables real-time monitoring of temperature changes during the continuous ion irradiation process, as well as real-time, non-destructive measurement of the critical current of HTS tapes. This capability facilitates dynamic research into the relationship between irradiation-induced damage and the degradation of HTS critical-current performance. The facility combines advanced accelerator beam technology, a high-efficiency cooling system, high-vacuum technology, and real-time monitoring and diagnostic techniques, providing a stable, efficient, and fully functional experimental device for revealing in depth the mechanism by which ion irradiation affects micro-defects and the critical-current performance of HTS tapes. The facility will be presented in detail in the paper.
Speaker: xiaofeng zhu -
10:45
167 Physics-Aware and Hardware-Efficient Federated Learning for Isotope Identification in Distributed Edge Radiation Networks 20m
Real-time and accurate isotope identification is critical for nuclear safety. Traditional methods rely on continuously transmitting raw spectral data from edge nodes to a centralized server, imposing severe bandwidth constraints and privacy vulnerabilities. While Federated Learning (FL) offers a decentralized alternative for this scenario, its deployment in radiation networks is hindered by feature misalignment caused by spectral drift and latency constraints on edge devices. To address these challenges, we propose a Physics-Aware Hardware-Efficient Federated Learning (PAHE-FL) framework. First, we embed a physics-aware preprocessing module that utilizes the ubiquitous K-40 background signature for unsupervised gain stabilization, ensuring spectral consistency across edge devices. Second, to tackle non-IID environmental interference, we design a decoupled training strategy where the feature extractor is aggregated globally to learn universal characteristics, while classifier heads are updated locally to adapt to specific backgrounds. Finally, the model is optimized via post-training quantization to enable deployment on resource-constrained microcontrollers. Experimental results on STM32 devices demonstrate that PAHE-FL achieves millisecond-level latency and superior robustness, significantly outperforming standard approaches in dynamic environments.
Speaker: Ms Yuxin Wu (Tianjin University of Technology) -
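The K-40-based gain stabilization used in PAHE-FL can be sketched in a few lines. This is an illustrative simplification, not the authors' implementation: the peak search, search window, and calibration constants below are all assumed.

```python
import numpy as np

K40_KEV = 1460.8  # K-40 gamma line, keV

def gain_correction(spectrum, kev_per_ch, search_kev=(1300.0, 1600.0)):
    """Estimate a multiplicative gain-correction factor from the K-40
    background peak: after correction the observed peak lands at 1460.8 keV."""
    ch = np.arange(len(spectrum))
    energy = ch * kev_per_ch                                  # nominal calibration
    window = (energy >= search_kev[0]) & (energy <= search_kev[1])
    peak_ch = ch[window][np.argmax(spectrum[window])]         # crude peak search
    return K40_KEV / (peak_ch * kev_per_ch)

# Toy spectrum: exponential background plus a K-40 peak whose gain
# has sagged by 5% relative to the nominal 1 keV/channel calibration.
nominal, drift = 1.0, 0.95
ch = np.arange(2048)
true_kev = ch * nominal * drift
counts = 1000.0 * np.exp(-true_kev / 400.0)
counts += 500.0 * np.exp(-0.5 * ((true_kev - K40_KEV) / 15.0) ** 2)

f = gain_correction(counts, nominal)   # recovers approximately 0.95
```

In a fielded edge node the same factor would be applied to the energy axis before features enter the federated model, keeping spectra aligned across detectors.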
10:45
210 Application of the FPGA Coprocessor TM FAST for Siemens S7-1500 PLCs in Neutron Instruments 20m
The Siemens FPGA coprocessor TM FAST for Siemens S7-1500 PLCs supports processing times on the order of 50 ns, several orders of magnitude below the PLC cycle time, as well as count rates of a few MHz.
Since TM FAST supports isochronous mode, the TM FAST processing cycle can be synchronized with PLC tasks as well as with PROFINET or PROFIBUS decentralized devices. This allows TM FAST to be used in motion applications, e.g. for the readout of encoders with protocols not supported by standard Siemens PLC modules, or for the synchronization between detectors and motion axes.
Another typical application is the readout of simple detectors, e.g. neutron monitors or helium-3 proportional counters. Due to the lower limit of PLC cycle times and the limited I/O size of TM FAST, it has only limited capabilities for the readout of time-resolved detectors, which requires careful consideration. Speaker: Harald Kleines -
10:45
214 A common framework for real-time structured data communication in fusion devices 20m
This work presents the development of a unified software environment that allows real-time communication of structured data in the control system of fusion devices. The main aims are: enabling the seamless integration of MATLAB/Simulink(R) control algorithms, which often rely on structured data for signal I/O, into machine-specific control system software; and to do so by developing a single software framework for multiple fusion devices. The project updates three key open-source frameworks—SCDDS for control algorithm analysis and design, MARTe2 for real-time execution, and MDSplus for data management—ensuring their interoperability and broad applicability. Central to the effort is the creation of robust data and signal interfaces that allow structured-data interaction among SCDDS, MARTe2, MDSplus, and MATLAB/Simulink(R). Structure and structure-array support has been added to MDSplus: MDSplus is now capable of storing structured data in single nodes, in the form of Dictionaries and Lists of Dictionaries; SCDDS is now capable of correctly reading and writing structures to and from MDSplus Dictionaries and Lists of Dictionaries; and the MARTe2-Simulink interface algorithm has been completely revised and updated to support structure arrays, both as signals and as parameters. Moreover, an interface has been developed to allow loading structured parameters from MDSplus into MARTe2. The resulting software is designed to be compatible with major experimental platforms, including TCV, DTT, and RFX-mod2, as well as other devices employing MARTe2 as their real-time framework. Furthermore, the architecture anticipates future extension to systems using different frameworks, thereby enhancing scalability and long-term relevance.
Speaker: Dr Nicolo Ferron (Consorzio RFX) -
10:45
215 Fueling modeling and control for ITER start of research operation 20m
ITER's Start of Research Operation (SRO) targets plasmas reaching ~7.5 MA for durations exceeding 100 seconds. Effective control of fueling, density, impurity dosing, edge-localized modes (ELMs), and H-mode transitions is critical. To support model-based controller design, the Gas Injection System (GIS) and Pellet Injection System (PIS) have been modeled such that a complete fueling control system can be developed and assessed to address the complex requirements and challenges unique to ITER. These challenges involve lag time due to lengthy gas lines, fueling efficiency decay at high plasma temperatures and densities, synchronization of multiple actuators, and balancing ELM pacing with fueling needs using an advanced Actuator Manager (AM).
The GIS and PIS models utilize 1-D particle transport models for the gas flow and diffusion through the pipe and the pellet transport through the Flight Tubes (FTs), which have been implemented within the Plasma Control System Simulation Platform (PCSSP). Furthermore, the particle deposition into the plasma and the plasma-neutral interactions are modeled through the RApid Plasma DENsity Simulator (RAPDENS). The density and neutral pressure are also monitored and controlled with respect to the various density and pressure limits at all phases of the plasma, from prefill to termination. The results presented here demonstrate the complete feedback control and integration of the GIS, PIS, AM, RAPDENS, and limit monitoring functions for modeling the smooth transitions between fueling modes and effective handling of actuator failures. This poster presents the architectural design, simulation results, and future strategies for optimizing fueling control on ITER. Speaker: David Weldon (CEA, Cadarache) -
10:45
96 A Low-Latency Distributed Machine-Protection System for the PIP-II Linear Accelerator 20m
PIP-II at Fermilab features a high-intensity 800 MeV superconducting linear accelerator (linac) requiring a robust Machine Protection System (MPS) to prevent beam-induced damage to delicate cryomodules and vacuum components. A critical element of this system is the fast Analog Machine Protection System, a high-bandwidth platform designed for real-time beam loss monitoring. The system utilizes a modular, FPGA-based architecture to digitize signals from beam-sensing devices, such as AC Current Transformers (ACCTs), non-invasive Ring Pickups (RPUs) and beam scrapers.
To ensure galvanic isolation and noise immunity, digitized data is transmitted via fiber-optic links to central processing units that perform differential current analysis across the linac. The system is engineered for ultra-low latency, achieving a total response time of less than 10 microseconds from fault detection to beam inhibition. This paper presents the hardware selection and signal conditioning strategies for the MPS remote digitization nodes for the PIP-II linear accelerator.
Speaker: Dr Jonathan Daniel Eisch (Fermi National Accelerator Lab. (US))
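The differential current analysis at the heart of such an MPS can be illustrated with a toy version. This is a sketch only: the threshold, units, and persistence filter are assumptions, and a real implementation runs in FPGA logic with microsecond-scale latency, not in Python.

```python
def diff_current_trip(i_up, i_down, loss_thresh_ma=0.5, n_consec=3):
    """Differential beam-current analysis: trip when the upstream-minus-
    downstream current exceeds loss_thresh_ma for n_consec consecutive
    samples (the persistence filter rides out single-sample noise).
    Returns the sample index at which the inhibit fires, or None."""
    run = 0
    for k, (u, d) in enumerate(zip(i_up, i_down)):
        run = run + 1 if (u - d) > loss_thresh_ma else 0
        if run >= n_consec:
            return k
    return None

# 2 mA beam with a sustained 0.8 mA loss appearing at sample 6:
trip = diff_current_trip([2.0] * 10, [2.0] * 6 + [1.2] * 4)
print(trip)  # 8 -> inhibit asserted on the third consecutive bad sample
```

Comparing currents measured at two ACCTs along the linac in this way localizes loss to the section between them, which is why the central nodes aggregate digitized data from multiple fiber links.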
-
10:20
-
11:05
→
12:05
Poster session: AI, Machine Learning, Real Time Simulation, Intelligent Signal Processing
-
11:05
FPGA-Deployed Variational Autoencoder for Real-Time Soft X-Ray Electron Temperature Reconstruction in RFX-mod2 20m
RFX-mod2 is the upgraded version of the RFX-mod Reversed-Field Pinch (RFP) device, operated at Consorzio RFX. Given the substantial phenomenological and engineering complexity of the device, control performance is expected to benefit significantly from the integration of a broader set of diagnostics, such as Soft X-Ray (SXR) measurements, that help characterize the internal plasma state. A crucial parameter for accurately assessing this state is the electron temperature, inferred from SXR radiation. However, to effectively integrate this information in real time, it is necessary to derive a denoised, low-dimensional representation of the acquired signal within a timeframe compatible with the control cycle.
Within this context, autoencoder (AE) architectures and their variational extension (VAE) can be employed to capture the strong non-linearities characteristic of plasma behavior. The proposed VAE model, initially developed in earlier work at the RFX Consortium and subsequently extended, has been evaluated on two datasets: one synthetic and one experimental, the latter derived from the SXR diagnostic of RFX-mod. To achieve real-time deployment on FPGA hardware, the model is subjected to a quantization procedure using either the CERN/HLS4ML framework or the Xilinx/FINN framework. The system demonstrates the ability to recover relevant physical information in real time, suitable for integration into plasma discharge control alongside other diagnostic systems. Results show that the VAE can consistently reconstruct the input temperature profile, even under moderate to high levels of data loss, highlighting the potential of deep generative models for robust data imputation in fusion plasma diagnostics. Speaker: Luca Orlandi (Università di Padova - Consorzio RFX) -
11:25
Single-Detector Cosmic-Ray Muon Identification in Plastic Scintillators Using Machine Learning 20m
Cosmic rays at ground level are dominated by high-energy muons and are traditionally identified using coincidence techniques that require multiple detectors. In this work, a machine-learning-based approach is proposed for single-detector cosmic-ray muon identification using a plastic scintillation detector. Waveform data were acquired from gamma-ray events obtained using standard gamma-emitting sources and from cosmic-ray muons identified through a conventional coincidence setup for labeling purposes. The signals were digitized using a fast waveform digitizer and directly used as inputs to a machine learning model.
Machine learning was employed to discriminate cosmic-ray muons from gamma background based on waveform characteristics. When applied to background radiation measurements, the trained model successfully extracted the cosmic-ray muon component and reproduced the characteristic muon energy deposition peak in the plastic scintillator, consistent with expectations for minimum ionizing particles. The machine-learning-based results show good agreement with those obtained using traditional coincidence techniques. These results demonstrate that machine learning enables reliable cosmic-ray muon identification using a single plastic scintillation detector, offering a simplified and cost-effective alternative to hardware-intensive coincidence systems for radiation monitoring and cosmic-ray studies. Speaker: Hai Vo -
11:45
A 1D-Convolutional Autoencoder for Pulse Compression in Nuclear DAQ Systems 20m
High-speed digital Data Acquisition (DAQ) systems in modern nuclear physics face a common bottleneck: the massive data throughput generated by digitizing full waveforms at high sampling rates. This limitation restricts the effective counting rate and increases the beam time required to achieve statistical significance.
This work proposes a generic, deep learning-based pulse compression framework designed for scintillation detectors. The approach utilizes a 1D Convolutional Autoencoder (1D-CAE) with an asymmetrical architecture optimized for Edge Computing. While this study presents the validation of the model architecture, the ultimate objective is the hardware implementation of the Encoder directly into the front-end electronics (e.g., FPGAs) for real-time operation.
To validate this concept, the method was tested using data from the Neutron Detector Array (NEDA). Results demonstrate that the proposed architecture successfully compresses pulses into a lower-dimensional latent space while preserving the critical morphological features required for Pulse Shape Discrimination (PSD). This validation provides the necessary proof-of-concept to proceed with the firmware deployment, aiming to significantly reduce data bandwidth without compromising the intrinsic detection efficiency.
Speaker: Jose Manuel Deltoro Berrio -
11:45
A High-performance DOI Encoding Algorithm for PET Detectors with High Coupling Ratio 20m
For single-ended readout positron emission tomography (PET) detectors, the simplest method for determining depth-of-interaction (DOI) is based on the ratio of the photodetector signal of the corresponding crystal to the overall detector signal, obtained by placing a light guide on top of the detector. However, as the coupling ratio between crystal and photodetector increases in pursuit of higher resolution, the effectiveness of this method gradually deteriorates. This paper proposes a new DOI encoding algorithm based on the ratio of a weighted combination of the top three signals to the overall detector signal, where the weighting coefficients are optimized through mathematical derivation. The new method was applied to our single-ended row-column summation readout PET detector with a 9:1 coupling ratio. Experimental results show that the new algorithm enables uniform DOI resolution across crystals, achieving an average DOI resolution of 4.07 mm Full Width at Half Maximum (FWHM), a 12.1% improvement over the traditional method. Since the new algorithm largely improves DOI determination without any hardware changes, it retains the simplicity and cost-effectiveness of single-ended row-column summation readout PET detectors, even when pursuing high position resolution with high coupling ratios.
Speaker: Yonggang Wang (University of Science and Technology of China) -
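The top-three-signal ratio can be sketched in a few lines. The weighting coefficients below are illustrative placeholders only; the paper derives the optimal coefficients mathematically.

```python
import numpy as np

def doi_ratio(signals, w=(1.0, 0.6, 0.3)):
    """DOI estimate from the ratio of a weighted sum of the top three
    photodetector signals to the total detector signal. The weights w
    are illustrative placeholders, not the paper's derived optimum."""
    s = np.sort(np.asarray(signals, dtype=float))[::-1]  # descending order
    return float(np.dot(w, s[:3]) / s.sum())

# Near the photodetector the light stays confined to one channel (high
# ratio); deeper interactions spread light via the light guide (low ratio).
shallow = doi_ratio([9.0, 0.5, 0.3, 0.2])
deep = doi_ratio([3.0, 2.5, 2.3, 2.2])
print(shallow > deep)  # True
```

The single-signal method is the special case w = (1, 0, 0); including the second and third signals is what keeps the estimator sensitive when a high coupling ratio concentrates light into few channels.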
11:45
A Maximum Log-Likelihood Regression Approach for Quantitative Mixture Prediction in PGNAA Spectroscopy 20m
Scrap recycling is a vital source of sustainable raw materials, yet real-time analysis of heterogeneous metal flows remains a significant challenge. While Prompt Gamma Neutron Activation Analysis (PGNAA) offers a non-destructive method for elemental analysis, traditional categorical classification models are limited by their inability to resolve intermediate material compositions. In this work, we present a novel regression-based approach for PGNAA spectroscopy that enables, for the first time, the quantitative determination of arbitrary mixture ratios in metal alloys.
Our framework utilizes a probability-distribution-based sampling method to generate synthetic datasets from reference long-term spectra. These mixtures are modeled as linear combinations of reference alloys, defined by a single mixing parameter λ. We employ a Maximum Log-Likelihood method to estimate λ from noisy, short-term measurements.
The results demonstrate high prediction accuracy despite the inherent statistical noise of rapid acquisition times. At a measurement time of only 1 s, 98% of the predictions for Aluminium-copper mixtures deviate by less than 2% from the true fraction. Furthermore, the approach proves robust across various alloy combinations, maintaining a median prediction close to the preset fraction. This work represents an important step toward precise, non-destructive online analysis of heterogeneous metal flows and provides a technical foundation for future real-time monitoring of alloy compositions.
Speaker: Helmand Shayan -
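The maximum log-likelihood estimation of the mixing parameter λ can be sketched as a grid search over a Poisson likelihood. This is a simplified stand-in for the paper's sampling-based framework; the reference spectra below are synthetic and purely illustrative.

```python
import numpy as np

def estimate_lambda(counts, ref_a, ref_b, grid=np.linspace(0.0, 1.0, 1001)):
    """Maximum log-likelihood estimate of the mixing fraction lambda,
    modeling the measured spectrum as Poisson counts with expectation
    lam * ref_a + (1 - lam) * ref_b (references scaled to live time)."""
    counts = np.asarray(counts, dtype=float)
    lls = []
    for lam in grid:
        mu = np.clip(lam * ref_a + (1.0 - lam) * ref_b, 1e-12, None)
        lls.append(np.sum(counts * np.log(mu) - mu))  # Poisson LL up to a const.
    return float(grid[int(np.argmax(lls))])

# Synthetic references: shared background plus element-specific peaks.
rng = np.random.default_rng(1)
e = np.arange(200.0)
bg = 50.0 * np.exp(-e / 60.0)
ref_a = bg + 200.0 * np.exp(-0.5 * ((e - 80.0) / 4.0) ** 2)
ref_b = bg + 200.0 * np.exp(-0.5 * ((e - 140.0) / 4.0) ** 2)

counts = rng.poisson(0.7 * ref_a + 0.3 * ref_b)  # short, noisy measurement
lam_hat = estimate_lambda(counts, ref_a, ref_b)  # close to 0.7
```

Even with full Poisson noise on a single short acquisition, the estimator recovers the preset fraction closely, which mirrors the robustness at 1 s measurement times reported in the abstract.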
11:45
A Scalable Intelligent Agent System for Automated Monitoring and Debugging Support of the JUNO Electronics system 20m
Following the start of stable data taking at the Jiangmen Underground Neutrino Observatory (JUNO), reliable link monitoring is required beyond standard data-quality checks. In particular, the Back-End Card subsystem is a key component of trigger links, and a recurring operational challenge is rapid and accurate root-cause localization when a link drops across multiple hardware and software layers. While JUNO has accumulated many Python diagnostics to inspect counters, logs, and topology-dependent symptoms, conventional diagnostic procedures do not scale and remain fragmented across scripts.
This study develops an AI agent architecture that makes Large Language Model (LLM) based assistance usable and maintainable in such complex scientific software ecosystems by addressing three core problems: reliable tool execution, reliable long-task solving, and scalable onboarding of new tools. The proposed two-mode framework comprises, first, an Analysis Agent that invokes tools through Skill Cards and maintains long-term task coherence via a Folded Memory mechanism; Skill Cards are structured, type-safe contracts capturing argument schemas, usage policies, and safety constraints. Second, to resolve the bottleneck of manual tool integration, a Learning Agent automatically converts raw Python functions into Skill Cards and iteratively validates them to prevent schema drift, runtime failures, and tool distraction in a growing registry.
The resulting workflow orchestrates existing diagnostics with engineering-grade determinism while preserving LLM flexibility in procedure selection. Initial evaluations indicate improved consistency, stronger traceability, and better success rate compared to standard AI Agents while keeping the diagnostic library modular and maintainable as it evolves.
Speaker: Nizar Mahri (IIHE) -
11:45
An Initial Study of Neural-Network-Assisted Approaches for Drift Chamber Tracking 20m
Drift chambers are widely used for charged-particle tracking in nuclear and high-energy physics experiments. Track reconstruction commonly involves combinatorial hit association followed by fitting. In experiments, additional noise enlarges the search space, leading to increased and unstable tracking runtime, a problem for (near-) real-time analysis. Motivated by this behavior, neural-network-assisted hit filtering is investigated to suppress noise and limit combinatorial effects in drift chamber tracking. Hits are represented as nodes in an event-wise graph and classified as true or noise using a graph-based neural network prior to track fitting. Using simulated data consistent with experimental hit multiplicities, the proposed filter achieves high classification accuracy and reduces noise by approximately a factor of 100, limiting the number of hit combinations passed to the track fitting stage. These results suggest that neural-network-assisted hit filtering may serve as a useful pre-processing step for controlling tracking runtime. As neural-network inference introduces computational overhead, the overall runtime impact depends on the noise level and the balance between inference cost and noise reduction.
Speaker: Mr Viet Nguyen (RIKEN Nishina Center) -
11:45
Benchmarking Neural Network Inference on Versal ACAP AI Engines for Real-Time Detector Alignment 20m
Accurate real-time alignment of detector elements is essential to maintain the integrity of reconstructed particle trajectories, especially in high-rate environments such as the ATLAS experiment at the Large Hadron Collider (LHC). Any misalignment in the detector geometry can introduce systematic biases that affect the accuracy of precision physics measurements. Current calibration systems that correct for these effects require substantial computational resources, incur high operational costs, and are often unable to handle rapidly changing conditions.
To overcome these challenges, we propose a calibration system that employs a lightweight neural network to predict the misalignment of the detectors in real time. We present a neural network model with hierarchical subset solvers optimized for Versal ACAP that predicts detector misalignment based on the detector’s current geometry and statistical characteristics of reconstructed particle tracks.
We address this by implementing the system on a heterogeneous architecture, partitioning sequential tasks to CPUs while offloading the computationally intensive matrix multiplications to the AI Engines to exploit their specialized vector processing capabilities. We benchmark this implementation against FPGA and GPU to evaluate trade-offs in latency and power across different data types. This work is implemented as a proof of concept for a modern beam telescope to demonstrate the viability of performing inference on the edge at low cost per watt, a crucial requirement for future high-energy physics experiments.
Speaker: Akshay Malige (Brookhaven National Laboratory (US)) -
11:45
Design-Space Exploration and Integer Quantization of Graph Neural Networks for Real-Time FPGA Track Finding 20m
Real-time track finding for displaced-muon signatures in the CMS Level-1 trigger must operate under strict fixed-latency constraints (12.5~$\mu$s) while processing high-throughput detector data. Graph neural networks (GNNs) provide a natural representation of sparse, irregular detector geometries; however, mapping message-passing models to FPGAs requires careful co-optimization of numerical formats, architectural parameters, and high-level synthesis (HLS) microarchitecture.
We present an end-to-end workflow bridging GNN training and FPGA prototyping for a GraphSAGE-based model targeting real-time inference. The pipeline integrates: (i) automated design-space exploration across model dimensions, fixed-point precision, and HLS parameters to expose accuracy--latency--resource trade-offs; (ii) an integer-only INT8 implementation with data-driven bit-width optimization, reducing accumulator and scaling widths while preserving numerical correctness; and (iii) modular C++ kernels synthesized with Vitis HLS and validated through bit-exact C-simulation against Python integer references.
Preliminary validation on the Cora benchmark demonstrates that post-training quantization preserves model accuracy within 0.1\% of the floating-point baseline, while enabling substantial reductions in memory footprint and arithmetic complexity. Bit-exact agreement between software and hardware models is achieved using optimized fixed-point scaling. Quantization-aware training and physics-driven datasets for displaced-muon reconstruction are currently under development.
This work establishes a reproducible methodology for deploying message-passing GNNs on FPGAs under strict real-time constraints, providing a concrete path toward fixed-latency GNN-based track reconstruction in the CMS trigger system.
Speaker: Mr Pelayo Leguina (Universidad de Oviedo (ES)) -
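The integer-only INT8 scheme with a single end-of-accumulation rescale can be illustrated as follows. This is a generic symmetric post-training quantization sketch, not the authors' Vitis HLS kernels; sizes and distributions are arbitrary.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: x ~= q * scale, q in [-127, 127]."""
    scale = float(np.max(np.abs(x))) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(2)
W = rng.normal(0.0, 0.2, size=(16, 16))   # stand-in layer weights
x = rng.normal(0.0, 1.0, size=16)         # stand-in activations

Wq, sw = quantize_int8(W)
xq, sx = quantize_int8(x)

# Integer-only multiply-accumulate in int32, with a single rescale at the
# very end -- the pattern that keeps the FPGA datapath free of floats.
acc = Wq.astype(np.int32) @ xq.astype(np.int32)
y_int8 = acc * (sw * sx)
y_ref = W @ x

rel_err = np.max(np.abs(y_int8 - y_ref)) / np.max(np.abs(y_ref))
```

The int32 accumulator cannot overflow here (16 products of magnitude at most 127 × 127), which is the kind of data-driven bit-width argument the abstract applies to shrink accumulator and scaling widths.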
11:45
Event-Driven Spiking Neural Networks for Emergent Robotic Swarm Behavior 20m
Spiking Neural Networks (SNNs) offer inherent energy efficiency for edge robotics, but their dynamic, event-driven, and sparse nature complicates the provision of hard real-time guarantees. This paper presents a lightweight, custom event-driven scheduler implemented on a resource-constrained Cortex-M0+ microcontroller. The scheduler ensures predictable execution by isolating high-priority spike acquisition from lower-priority cascade processing.
We analyze the Worst-Case Execution Time (WCET) of spike propagation using a sensorimotor neural circuit incorporating a Central Pattern Generator (CPG) driven by a command neuron. This architecture, inspired by rhythmic locomotor networks in flying insects, generates a self-sustained and highly predictable rhythmic output from sparse inputs. Its intrinsic regularity makes it well suited for rigorous evaluation of output stability and temporal jitter within a closed-loop control system.
System performance and timing guarantees are demonstrated by integrating the circuit into a simple mobile robot performing real-time obstacle avoidance. The internal SNN processing load and its WCET are modeled as a self-organized critical (SOC) process using the Bak–Tang–Wiesenfeld (BTW) cascading-event model. By enforcing a bounded refractory period and strong inhibitory synaptic weights, the network operates in a sub-critical regime. This enables derivation of a practical upper bound on maximum avalanche size, thereby establishing deterministic temporal stability of the event-driven cascade under varying sensory loads. Speaker: Dr Marcos Turqueti -
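The sub-critical BTW picture can be reproduced in miniature: adding a per-grain dissipation probability pushes the branching ratio below 1 and keeps avalanche sizes bounded, analogous to the role the abstract assigns to the refractory period and inhibitory weights. Grid size, threshold, and dissipation value below are illustrative choices, not parameters from the paper.

```python
import numpy as np

def btw_avalanche(grid, i, j, rng, threshold=4, dissipation=0.2):
    """Drop one grain at (i, j) and relax the Bak-Tang-Wiesenfeld pile;
    return the avalanche size (number of topplings). Each transferred
    grain is lost with probability `dissipation`, which drives the
    cascade sub-critical so avalanche sizes acquire an exponential cutoff."""
    n = grid.shape[0]
    grid[i, j] += 1
    size = 0
    stack = [(i, j)]
    while stack:
        a, b = stack.pop()
        if grid[a, b] < threshold:
            continue
        grid[a, b] -= threshold          # topple the unstable cell
        size += 1
        if grid[a, b] >= threshold:      # may need to topple again later
            stack.append((a, b))
        for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x, y = a + da, b + db
            if 0 <= x < n and 0 <= y < n and rng.random() >= dissipation:
                grid[x, y] += 1
                stack.append((x, y))
    return size

rng = np.random.default_rng(3)
grid = rng.integers(0, 4, size=(20, 20))
sizes = [btw_avalanche(grid, rng.integers(20), rng.integers(20), rng)
         for _ in range(2000)]
# Sub-critical regime: the maximum observed avalanche stays small and
# bounded, which is what makes a WCET bound on the cascade derivable.
```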
11:45
Intelligent Pulse Timing on Digitized Nuclear Signals with Self-Weakly-Supervised Neural Regression Models 20m
Pulse timing is an important task for nuclear radiation detectors and is widely applied in nuclear spectroscopy, radiation imaging, high-energy physics, etc. While neural networks emerge as high-performance alternatives for precision timing of detector signals, the requirement for abundant labelled data poses a challenge for traditional supervised learning and limits the application of such methods. To reduce this reliance on labelled data, we combine intra-sample self-supervised learning and outer-sample weakly-supervised learning to form a new optimization paradigm for single-channel pulse timing. For the self-supervised part, we use the intrinsic timing correlation between pairs of waveform segments, along with a random shift for regularization, to learn linear models of arrival time; for the weakly-supervised part, we use extrinsic timing labels on a few waveform subsamples to form reference examples, rectifying mismatches between the timing bases of different examples caused by intra-sample supervision. Preliminary results show that, with as few as 1 reference example per 16 self-supervised pairs, the neural network model successfully recovers the pulse arrival time from the sampling points of the waveform digitizer, with a clearly unified time base. We conduct simulations with the TensorFlow (Keras) deep learning framework for 3 different waveform variations (fixed, discrete, and continuous), with a 1 GS/s digitizer at a signal-to-noise ratio of 54.8 dB. The proposed method overcomes the floating-base issue when waveform parameters vary, and achieves timing resolutions of 0.05 ns (fixed), 0.57 ns (discrete), and 0.12 ns (continuous), respectively.
Speaker: Pengcheng Ai (Central China Normal University) -
11:45
Real-time Autoregressive Evolution of Tokamak Plasma Magnetic Measurements 20m
High-precision, rapid prediction of the evolution of magnetic measurement signals is essential for tokamak plasma configuration control and the safe operation of discharges. Traditional physics-based models typically suffer from high computational complexity and long execution times, making it difficult to satisfy real-time requirements, while existing data-driven methods often lack reliability in long-term extrapolation. To address these challenges, this paper proposes a real-time autoregressive magnetic measurement evolution method based on the Transformer architecture. The model takes historical magnetic measurement data and real-time control commands as inputs, leveraging the self-attention mechanism of the Transformer to capture long-range temporal dependencies. At each time step, the model predicts the magnetic measurements for the next step and recursively feeds these predictions back as subsequent inputs, thereby achieving continuous temporal evolution. Experimental results demonstrate that the proposed single-step autoregressive approach accurately tracks the evolution trajectory of magnetic signals during the discharge process, achieving an accuracy of no less than 92% over a 1000 ms duration. Furthermore, its single-step inference structure is well-suited for low-latency hardware deployment, fulfilling the rigorous demands of real-time control systems. This research provides a high-precision, high-efficiency digital environment for closed-loop control simulators and for the development of advanced reinforcement-learning-based controllers for magnetic confinement fusion devices.
Speaker: Dr Shuai Li (Institute of Energy, Hefei Comprehensive National Science Center(Anhui Energy Laboratory)) -
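The single-step autoregressive rollout can be sketched with a stand-in model: here a known AR(2) system replaces the Transformer so the closed-loop recursion can be checked exactly. All coefficients and the control trace are illustrative assumptions.

```python
import numpy as np

def rollout(model, seed, controls):
    """Closed-loop autoregressive evolution: at each step the model maps
    (recent outputs, control command) -> next output, and the prediction
    is fed back as input for the following step."""
    hist = list(seed)
    preds = []
    for u in controls:
        y = model(hist, u)
        preds.append(y)
        hist.append(y)        # recursive update: prediction becomes input
    return np.array(preds)

# Stand-in for the Transformer: a stable AR(2) system with control input.
a1, a2, b = 1.5, -0.6, 0.1
model = lambda h, u: a1 * h[-1] + a2 * h[-2] + b * u

preds = rollout(model, seed=[0.0, 1.0], controls=np.ones(50))
# The rollout converges to the steady state b*u / (1 - a1 - a2) = 1.0
print(abs(preds[-1] - 1.0) < 1e-3)  # True
```

With a learned model the same loop runs unchanged; the difference is only that `model` evaluates a neural network over the history window instead of a fixed recurrence, which is what makes the single-step structure attractive for low-latency deployment.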
11:45
Real-Time Machine-Learning Inference Workflows at FRIB using EJFAT and ESNet 20m
The Facility for Rare Isotope Beams (FRIB) is a United States Department of Energy Office of Science user facility focused on studying problems of national interest in low-energy nuclear physics. Real-time or near real-time analysis methods are critical tools for enabling FRIB science as new detectors and data acquisition technologies which allow for higher data rates and volumes are incorporated into laboratory systems. In addition, many experiments rely on computationally intensive analysis tasks where real-time processing on local computer systems is not feasible. The Energy Sciences Network (ESnet) provides the networking backbone for the ESnet-JLab FPGA Accelerated Transport (EJFAT) Load Balancers which, combined with the EJFAT Event Segmentation And Reassembly (E2SAR) software libraries, provide a platform for streaming data over UDP from FRIB to offsite locations such as the National Energy Research Scientific Computing Center (NERSC), where more computational resources are available to experimenters. An automated workflow was developed to stream data from files over ESnet from an FRIB experiment to NERSC using EJFAT/E2SAR, process digitized detector waveform traces at NERSC using a machine-learning inference framework, extract features of interest from the trace data, and send the results back to FRIB. A summary of the workflow will be presented.
Speaker: Giordano Cerizza (Facility for Rare Isotope Beams @ Michigan State University) -
11:45
Real-Time Plasma Density Prediction and Control Framework for Pellet Injection on EAST Tokamak 20m
Precise control of plasma density is critical for high-performance steady-state operation in fusion devices. The candidate fueling method for density control in future tokamaks is pellet injection. However, pellet injection introduces rapid, highly non-linear density perturbations that challenge the latency and accuracy limits of traditional feedback systems. This study presents a real-time density evolution prediction and control architecture for the EAST Tokamak. The core challenge addressed is the requirement to process high-dimensional multi-physics diagnostic data—including magnetic equilibrium parameters, radiation profiles, and Dα signals—and to generate reliable future density estimates within the 10 ms control cycle required for active feedback.
To achieve this, we deploy a lightweight, attention-based LSTM model optimized for low-latency inference. The system integrates a rolling-window mechanism that synchronizes data acquisition, pre-processing, and model inference at 100 Hz. This design ensures that the prediction of density evolution, particularly the sharp rise and decay following pellet ablation, is computed and fed into a Model Predictive Control (MPC) solver within the allocated time budget. Offline tests with EAST experimental data demonstrate that the system successfully captures complex non-linear dynamics with negligible computational lag. By overcoming the trade-off between model complexity and real-time responsiveness, this framework enables precise, automated trajectory tracking and oscillation suppression during high-frequency pellet injection scenarios.
Speaker: Yucheng Wang -
11:45
Self-Supervised and Semi-Supervised Machine Learning for Waveform-Based Neutron/Gamma Discrimination in EJ-276 Plastic Scintillator 20m
Accurate neutron/gamma pulse shape discrimination (PSD) in plastic scintillators is strongly limited at low energies and further complicated by the scarcity and uncertainty of labeled training data, particularly for mixed neutron–gamma sources such as Cf-252. Conventional supervised deep learning approaches rely heavily on clean labels, which are often difficult to obtain in practical experimental conditions.
In this work, we propose a waveform-based neutron/gamma discrimination framework based on self-supervised and semi-supervised learning to reduce label dependence and improve robustness. A self-supervised pretraining stage is first employed to learn latent representations from large volumes of unlabeled scintillation waveforms using contrastive learning. The pretrained model is then fine-tuned with a limited set of labeled neutron and gamma events acquired from Cf-252 and Co-60 sources. Experimental results using an EJ-276 plastic scintillation detector show that the proposed approach significantly improves discrimination performance in the low-energy region below 150 keVee, achieving higher Figures of Merit compared to fully supervised models trained on the same labeled dataset. Speaker: Hai Vo -
11:45
Studies for a track finder algorithm based on Graph Neural Networks for the MEG II experiment 20m
The MEG II experiment at PSI searches for the charged lepton flavour violating decay $\mu^+ \to e^+\gamma$ with unprecedented sensitivity. Fast and efficient positron track reconstruction is a key challenge for online data processing, as the cylindrical drift chamber is a gaseous detector with intrinsically slow response and is therefore not used in the first-level trigger. Moreover, the complex detector geometry and the non-solenoidal magnetic field make conventional track fitting computationally expensive.
We present a novel track finding approach based on Graph Neural Networks (GNNs), designed to enable a faster and more efficient reconstruction of positron tracks from drift chamber hits. Hits are mapped to graph nodes, with edges encoding geometrical and temporal compatibility, allowing the GNN to identify track candidates in high-occupancy conditions. The method significantly reduces the computational cost of pattern recognition and track finding compared to traditional algorithms.
The model is trained on both Monte Carlo simulations and MEG II data. We report performance in terms of efficiency, resolutions and execution time, highlighting the improvements obtained. This technique opens the possibility of exploiting tracking information during data taking: future studies could allow the optimized GNN algorithm to be implemented in the trigger logic of future $\mu \to e\gamma$ experiments.
Speaker: Antoine Venturini (INFN Pisa)
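The hit-to-graph mapping described above can be sketched as follows; the pairwise spatial and temporal compatibility cuts, units, and function names are placeholders, not the MEG II values:

```python
import numpy as np

def build_hit_graph(xy, t, max_dist=5.0, max_dt=10.0):
    """Map drift-chamber hits to graph nodes and connect pairs whose
    spatial separation (cm, illustrative) and time difference (ns,
    illustrative) are compatible with belonging to one track segment.
    The resulting edge list is what a GNN edge classifier would score."""
    edges = []
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            if (np.linalg.norm(xy[i] - xy[j]) < max_dist
                    and abs(t[i] - t[j]) < max_dt):
                edges.append((i, j))
    return edges
```

In a real track finder the quadratic pair loop would be replaced by a spatial index, and edge features (angles, drift times) would feed the GNN.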
-
11:05
-
11:05
→
12:05
Poster session: Real Time Diagnostics, Digital Twin, Control, Monitoring, Safety and Security
-
11:05
Application of the FPGA Coprocessor TM FAST for Siemens S7-1500 PLCs in Neutron Instruments 20m
The Siemens FPGA Coprocessor TM FAST for Siemens S7-1500 PLCs supports processing times on the order of 50 ns, which is several orders of magnitude below the PLC cycle time, as well as count rates of a few MHz.
Since TM FAST supports isochronous mode, the TM FAST processing cycle can be synchronized with PLC tasks as well as with PROFINET or PROFIBUS decentralized devices. This allows the use of TM FAST in motion applications, e.g. for the readout of encoders with protocols not supported by standard Siemens PLC modules, or for the synchronization between detectors and motion axes.
Another typical application is the readout of simple detectors, e.g. neutron monitors or Helium-3 proportional counters. Due to the lower limit of PLC cycle times and the limited IO size of TM FAST, it has only limited capabilities for the readout of time-resolved detectors, which requires careful consideration.
Speaker: Harald Kleines -
11:25
A common framework for real-time structured data communication in fusion devices 20m
This work presents the development of a unified software environment that allows real-time communication of structured data in the control systems of fusion devices. The main aims are: enabling the seamless integration of MATLAB/Simulink(R) control algorithms, which often rely on structured data for signal I/O, into machine-specific control system software; and doing so by developing a single software framework for multiple fusion devices. The project updates three key open-source frameworks—SCDDS for control algorithm analysis and design, MARTe2 for real-time execution, and MDSplus for data management—ensuring their interoperability and broad applicability. Central to the effort is the creation of robust data and signal interfaces that allow structured-data interaction among SCDDS, MARTe2, MDSplus, and MATLAB/Simulink(R). Structure and structure-array support has been added to MDSplus: MDSplus is now capable of storing structured data in single nodes, in the form of Dictionaries and Lists of Dictionaries; SCDDS is now capable of correctly reading and writing structures to and from MDSplus Dictionaries and Lists of Dictionaries; and the MARTe2-Simulink interface has been completely revised and updated to support structure arrays, both as signals and as parameters. Moreover, an interface has been developed to allow loading structured parameters from MDSplus into MARTe2. The resulting software is designed to be compatible with major experimental platforms, including TCV, DTT, and RFX-mod2, as well as other devices employing MARTe2 as their real-time framework. Furthermore, the architecture anticipates future extension to systems using different frameworks, thereby enhancing scalability and long-term relevance.
Speaker: Dr Nicolo Ferron (Consorzio RFX) -
11:45
A Low-Latency Distributed Machine-Protection System for the PIP-II Linear Accelerator 20m
PIP-II at Fermilab features a high-intensity 800 MeV superconducting linear accelerator (linac) requiring a robust Machine Protection System (MPS) to prevent beam-induced damage to delicate cryomodules and vacuum components. A critical element of this system is the fast Analog Machine Protection System, a high-bandwidth platform designed for real-time beam loss monitoring. The system utilizes a modular, FPGA-based architecture to digitize signals from beam-sensing devices, such as AC Current Transformers (ACCTs), non-invasive Ring Pickups (RPUs) and beam scrapers.
To ensure galvanic isolation and noise immunity, digitized data is transmitted via fiber-optic links to central processing units that perform differential current analysis across the linac. The system is engineered for ultra-low latency, achieving a total response time of less than 10 microseconds from fault detection to beam inhibition. This paper presents the hardware selection and signal conditioning strategies for the MPS remote digitization nodes for the PIP-II linear accelerator.
Speaker: Dr Jonathan Daniel Eisch (Fermi National Accelerator Lab. (US)) -
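A minimal sketch of the differential-current comparison such an MPS processing node might perform (the threshold, sample window, and consecutive-sample glitch filter are illustrative assumptions, not the PIP-II design values):

```python
def beam_loss_trip(i_upstream, i_downstream, threshold, n_consecutive=3):
    """Differential beam-current interlock: request a beam inhibit when the
    upstream-minus-downstream current difference (i.e. beam lost between
    the two ACCTs) exceeds `threshold` for `n_consecutive` samples in a
    row. The consecutive-sample requirement is a simple glitch filter."""
    run = 0
    for iu, idn in zip(i_upstream, i_downstream):
        run = run + 1 if (iu - idn) > threshold else 0
        if run >= n_consecutive:
            return True
    return False
```

In the real system this comparison runs in FPGA logic so that the full detect-to-inhibit chain stays under the quoted 10 microseconds.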
11:45
A Virtual Gamma-ray Spectrum Database for Training in Seawater Radioactivity Analysis 20m
Since the Fukushima Daiichi Nuclear Power Plant accident, interest in marine radioactivity monitoring has increased significantly, raising the demand for trained personnel capable of reliable gamma spectrometric analysis. However, education and training in seawater radioactivity measurement remain challenging, as low-level radionuclide analysis typically requires expensive high-purity germanium (HPGe) detectors and radiochemical pre-concentration processes such as ammonium phosphomolybdate (AMP) coprecipitation.
In this study, an education-oriented virtual marine radioactivity spectrum database is developed to support training without reliance on actual measurement infrastructure. Background gamma-ray spectra were derived from experimentally measured seawater samples, while artificial radionuclide spectra were independently calculated using Monte Carlo simulations. These two components were combined to construct a library of virtual training spectra, allowing systematic variation of radionuclide type and activity concentration. This approach enables generation of diverse and realistic training datasets representing a wide range of artificial radionuclide scenarios in marine environments.
To ensure applicability to practical training scenarios, representative seawater sample geometries and detector configurations were defined, and virtual spectra were generated for varying activity levels and counting conditions. The generated spectra were validated through energy calibration and spectral consistency checks against experimental measurements, demonstrating agreement within acceptable limits for educational use.
The resulting database enables trainees to practice spectrum interpretation, peak identification, and basic quantitative analysis in a realistic yet controlled environment. This approach is intended for capacity-building programs in Pacific islands where marine monitoring needs are increasing but radiological infrastructure is limited. The proposed framework provides a scalable and cost-effective solution for marine radioactivity education.
Speaker: Wanook Ji (Korea Atomic Energy Research Institute) -
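The background-plus-template composition described above can be sketched as follows (bin counts, the activity scaling, and all names are illustrative; the actual database uses measured seawater backgrounds and Monte Carlo radionuclide templates):

```python
import numpy as np

def virtual_spectrum(background_rate, template_rate, activity_scale,
                     live_time_s, rng):
    """Build one virtual training spectrum: a measured background count-rate
    spectrum plus a simulated radionuclide template scaled to the desired
    activity, converted to expected counts over the live time and then
    given realistic Poisson counting noise."""
    expected = (background_rate + activity_scale * template_rate) * live_time_s
    return rng.poisson(expected)
```

Sweeping `activity_scale` and `live_time_s` over a grid is enough to generate a library of spectra covering many activity-concentration scenarios.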
11:45
AI Enhanced LLRF Controller for Ultra-lightweight SC Cyclotron CYCIAE-100B 20m
An ultra-lightweight, high-temperature superconducting cyclotron, CYCIAE-100B, is under development at the China Institute of Atomic Energy (CIAE). This cyclotron is designed for use in specialized environments and requires fully automatic, unattended RF system operations. The cyclotron RF cavity is driven by a 20-kW solid-state power amplifier, directly coupled to the high-Q load, at 77 MHz under the control of the LLRF system. The LLRF system is responsible for maintaining stable amplitude and phase control of the RF field and for determining when the RF voltage should be established or removed. This paper presents a digital LLRF controller designed for the CYCIAE-100B, which operates in generator-driven mode. The hardware is based on the SOC chip ZYNQ-7045 and incorporates high-speed AD/DA converters, enabling digital demodulation/modulation and a fast PID algorithm for amplitude-phase control. A phase-adjusting network is included to align the high-speed sampling phase. For automated fault recovery, advanced interlock protection logic is implemented in the firmware. Spark detection is performed using digital signal tracking, and corresponding protection is implemented via fuzzy logic control. Additionally, supervised learning is integrated into the LLRF system to diagnose RF system faults. RF system parameters (e.g., drive, detuning, forward and reflected signals, and sparks) are recorded in real time upon trigger during cyclotron testing. Training of the ML fault model will be conducted using this dataset and is expected to yield valuable results for the LLRF protection logic. This paper reviews the design of the digital LLRF controller and presents preliminary test results.
Speaker: Tianyi Jiang (China Institute of Atomic Energy) -
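The digital demodulation and PID loop mentioned above can be illustrated as follows. The actual controller runs in fixed point in the ZYNQ fabric; this floating-point sketch only shows the signal-processing idea, and all gains and frequencies are illustrative:

```python
import numpy as np

def iq_demodulate(samples, f_if, fs):
    """Digital IQ demodulation: mix the sampled RF pickup signal with a
    quadrature numerical LO and average over an integer number of periods.
    Returns (amplitude, phase) of the RF field."""
    n = np.arange(len(samples))
    z = 2.0 * np.mean(samples * np.exp(-2j * np.pi * f_if / fs * n))
    return abs(z), np.angle(z)

class PID:
    """Proportional-integral corrector (derivative term omitted for
    brevity) applied independently to amplitude and phase errors."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.acc = kp, ki, dt, 0.0

    def step(self, err):
        self.acc += err * self.dt
        return self.kp * err + self.ki * self.acc
```

One such demodulate-compare-correct pass per control cycle is the core of a generator-driven amplitude-phase loop.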
11:45
Deep Learning–Based Real-Time Error Detection in Radiotherapy with EPID Images 20m
Accurate in-vivo dose monitoring is increasingly important in radiotherapy to comply with stringent dose delivery guidelines. Electronic Portal Imaging Devices (EPIDs), routinely used for patient positioning, offer the potential for real-time treatment monitoring. However, EPID images are only indirectly related to patient dose, and their use for error detection is challenging. In this study, an in-house developed DL model was used to transform raw EPID images into water-equivalent portal dose images. In particular, we assessed the feasibility of using these DL-generated portal dose images in a real-time error-detection alert system, in phantom experiments performed at the Careggi University Hospital (Florence, Italy).
Different phantoms were irradiated under reference conditions and with intentionally introduced treatment errors, including deviations in monitor units (MU) and phantom shifts. EPID images were converted to portal dose images using the DL model, and several comparison metrics were evaluated, including gamma-index analysis and the relative mean absolute dose difference (reMADD). The results demonstrate that MU errors can be reliably detected using a combination of metrics. Gamma passing rates decreased with increasing MU errors, while reMADD provided consistent sensitivity across different phantoms and field sizes. Setup errors were more challenging to detect, particularly for narrow fields, with reMADD showing slightly better sensitivity than gamma analysis.
Overall, the proposed DL-based portal dose comparison framework is appropriate for application in a real-time alert system for the detection of treatment errors. Future work will focus on extending the approach to clinical patient data.
Speaker: Emmanuel Uwitonze (National Institute for Nuclear Physics, Section of Pisa, Pisa, Italy) -
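A plausible form of the reMADD metric is sketched below: a mean absolute dose difference over pixels above a low-dose cut, normalized to the mean reference dose. The threshold fraction and the exact normalization are assumptions; the paper's precise definition may differ:

```python
import numpy as np

def remadd(reference, measured, threshold=0.05):
    """Relative mean absolute dose difference between a reference portal
    dose image and a measured (DL-generated) one, restricted to pixels
    above `threshold` times the reference maximum to exclude the
    out-of-field region."""
    mask = reference > threshold * reference.max()
    diff = np.abs(measured[mask] - reference[mask])
    return diff.mean() / reference[mask].mean()
```

For a uniform dose error the metric reduces to the fractional error itself, which is why it tracks MU deviations so directly.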
11:45
Design and Implementation of a Space Radiation Microdosimetric Detection Prototype 20m
A space radiation microdosimetric prototype has been developed as a real-time sandwich pixel detector system for studying radiation effects on human cells. The system employs two silicon pixel detectors arranged in a stacked geometry, with a microfluidic chip for cell culture positioned between them. By exploiting the high spatial resolution of the pixel detectors and their event-by-event digital measurement of energy deposition, the prototype supports micrometer-scale correlation of particle incidence positions and deposited energy. Each detector features a 10-μm-thick sensitive layer, a pixel pitch of 6.5 × 6.5 μm², and a 2048 × 2048 pixel matrix. To support the stacked configuration, a modular and synchronized real-time readout system is implemented based on a Xilinx Zynq UltraScale+ MPSoC platform. The system integrates multi-rail detector power supply, detector configuration, precise timing control, data alignment and training, continuous image acquisition, and high-speed data transmission. Background measurements show stable dark data performance, with row-wise mean values uniformly distributed between 5 and 7 digital numbers (DN) and RMS noise below 0.5 DN for most rows, indicating good baseline stability and noise uniformity. Preliminary laboratory tests using 60Co and 90Sr sources show consistent energy peak positions between the upper and lower detectors, while differences in counting rates are observed. Future work will focus on precise spatial synchronization and accelerator-based energy calibration using proton beams of different energies to establish correlations among particle events recorded by the stacked detectors.
Speaker: Dr Xiaoyan Gao (Shandong University) -
11:45
Design of an Integrated Control System for Automated Operation of Large Array of Imaging Atmospheric Cherenkov Telescope 20m
To achieve highly reliable and efficient automated observation and operation of Large Array of Imaging Atmospheric Cherenkov Telescope, this paper presents the design of the core architecture and key modules for an integrated control system.
Addressing the core challenges of LACT, including distributed multi-device deployment, integration of heterogeneous equipment, and coordination of complex observation workflows, a distributed and modular system design based on the TANGO control framework is proposed.
The design functionally decouples and integrates key subsystems—such as telescope drive control, front-end electronics monitoring, mirror adjustment, environmental monitoring, and real-time pointing calibration—establishing a unified control model and data interface.
The system design focuses on an automated workflow engine capable of parsing observation plans, monitoring equipment status, and scheduling task sequences.
It also incorporates an integrated operational module combining data archiving, state alarming, and preliminary fault diagnosis to support stable, unattended or minimally attended operation.
For user interaction, the design proposes a front-end/back-end separated architecture based on web technologies, laying the foundation for a future unified portal encompassing observation control, status visualization, and data management.
The proposed system design aims to meet the fundamental requirements of scalability, reliability, and automation for LACT's future large-scale construction and scientific operation, while also providing a referential paradigm for the integrated control of similar large-scale astronomical facilities.
Keywords: Large Array of Imaging Atmospheric Cherenkov Telescope; Automated Control; Integrated Control System; Distributed Architecture; TANGO Framework; System Design
Speaker: Ms Si MA (Institute of High Energy Physics, Chinese Academy of Sciences) -
11:45
Design of Transmission Cable Delay Calibration Method for Large-Scale Electron Accelerators 20m
The beam position monitoring system is an important component of electron accelerator systems. Signals from the button-type probes can be used to acquire various information about the beam bunch, such as the transverse position, longitudinal phase, charge quantity, and bunch length. In bunch-by-bunch longitudinal position measurement, it is necessary to accurately measure the temporal information of each bunch. Currently, the temporal resolution for such measurements can reach the sub-picosecond level. However, in large-scale electron accelerator systems, signal delay variations caused by long-term drift and temperature drift in cables can lead to significant deviations in bunch-by-bunch longitudinal phase measurement results. To address this problem, a cable delay calibration method has been developed to enable real-time measurement of transmission cable delay variations during beam experiments. For systems that process narrowband signals or do not require prolonged broadband transmission, multiple sinusoidal signals with constant amplitude and fixed frequency intervals are introduced at the input of the transmission cable. At the output, phase differences between these signals are extracted through high-speed sampling and digital signal processing, enabling accurate determination of cable delay variations. A dedicated calibration signal module has been designed and integrated into the constructed test platform. A series of tests have been conducted, and the results show that the time precision of the proposed cable delay calibration method meets the application requirements.
Speaker: Dr Zhe Cao (University of Science and Technology of China) -
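The multi-tone phase-slope idea behind the calibration can be sketched as follows: a pure cable delay τ contributes a phase φ(f) = −2πfτ to each tone, so τ follows from a linear fit of unwrapped phase versus tone frequency. The tone frequencies, spacing, and function names here are illustrative:

```python
import numpy as np

def group_delay_from_phases(freqs_hz, phases_rad):
    """Estimate the cable delay from the phases of fixed-interval
    calibration tones measured at the cable output. Since a delay tau
    gives phi = -2*pi*f*tau, a least-squares fit of unwrapped phase vs.
    frequency yields tau = -slope / (2*pi). The closely spaced tones
    keep adjacent phase steps small, so np.unwrap resolves the
    2*pi ambiguity of each individual measurement."""
    slope = np.polyfit(freqs_hz, np.unwrap(phases_rad), 1)[0]
    return -slope / (2 * np.pi)
```

Tracking this estimate continuously during beam operation is what turns long-term and temperature drift of the cable into a correctable quantity.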
11:45
Fueling modeling and control for ITER start of research operation 20m
ITER's Start of Research Operation (SRO) targets plasmas reaching ~7.5 MA for durations exceeding 100 seconds. Effective control of fueling, density, impurity dosing, edge-localized modes (ELMs), and H-mode transitions is critical. To support model-based controller design, the Gas Injection System (GIS) and Pellet Injection System (PIS) have been modeled such that a complete fueling control system can be developed and assessed to address the complex requirements and challenges unique to ITER. These challenges involve lag-time due to lengthy gas lines, fueling efficiency decay at high plasma temperatures and densities, synchronization of multiple actuators, and balancing ELM pacing with fueling needs using an advanced Actuator Manager (AM).
The GIS and PIS models utilize 1-D particle transport models for the gas flow and diffusion through the pipe and the pellet transport through the Flight Tubes (FTs), which have been implemented within the Plasma Control System Simulation Platform (PCSSP). Furthermore, the particle deposition into the plasma and the plasma-neutral interactions are modeled through the RApid Plasma DENsity Simulator (RAPDENS). The density and neutral pressure are also monitored and controlled with respect to the various density and pressure limits at all phases of the plasma, from prefill to termination. The results presented here demonstrate the complete feedback control and integration of the GIS, PIS, AM, RAPDENS, and limit monitoring functions for modeling the smooth transitions between fueling modes and effective handling of actuator failures. This poster presents the architectural design, simulation results, and future strategies for optimizing fueling control on ITER.
Speaker: David Weldon (CEA, Cadarache) -
11:45
High-dose-rate precise ionization chamber with real-time electronics for SC cyclotron-based proton therapy system 20m
A 240 MeV microampere-level extracted beam is now available from the superconducting cyclotron CYCIAE230, which was designed by the China Institute of Atomic Energy (CIAE). To stabilize the beam current of the cyclotron, a non-intercepting high-dose-rate ionization chamber (IC) system is designed and tested at the cyclotron exit, providing real-time proton beam current measurements for feedback control. The same design is used for the irradiation station downstream of the beamline, for proton FLASH studies and for radiation-effect studies of integrated circuits in aerospace applications. A laser-etched, ultra-thin, gold-plated PI film is selected for both the integral plane and the multi-strip cathode in this design to increase the IC's lifetime, aided by a dry nitrogen gas system. Environmental compensation circuits, integral circuits, and front-end 128-channel ADCs are integrated into the IC to improve the system's signal-to-noise ratio and accuracy. The multichannel charge readouts are acquired via an ultra-thin multichannel coaxial cable that provides high-speed digital communication between the ADCs and the readout SOCs. A bare-metal C++ program is developed to run on a ZYNQ processor and provide real-time current and charge readings. The IC and its readout electronics have been integrated and tested with a proton beamline of the CYCIAE230 cyclotron. The preliminary test results show that high accuracy and repeatability can be achieved after calibration using the Faraday cup downstream. The design of the IC, the integrated circuits, the readout electronics, and the software will be reported in this paper, along with preliminary test results from high-dose-rate measurements.
Speaker: xiaoqing ren -
11:45
Ion Irradiation facility with Performance Online Monitoring for High Temperature Superconducting Tape 20m
An ultra-compact multi-particle superconducting cyclotron is currently under construction at the China Institute of Atomic Energy (CIAE). It is designed for the irradiation modification of high-temperature superconducting (HTS) tapes and for the production of the medical isotope 211At. An ion irradiation facility with online monitoring of the temperature and critical-current performance of HTS tapes has been designed. The facility comprises the following systems: 1) Beam transport system: this includes multiple sets of quadrupole magnets and an octupole magnet for beam shaping and modulation to obtain a large irradiation area with high uniformity. These efforts have culminated in a large-area, uniform beam spot (not smaller than 2×2 cm) at the irradiation terminal, with a uniformity better than 90%. 2) Ion irradiation terminal: this terminal features a high-precision roll-to-roll transport system, a high-vacuum system, an efficient cooling system, and an in-house developed online monitoring system for temperature and critical-current performance. It enables real-time monitoring of temperature changes during the continuous ion irradiation process, as well as real-time, non-destructive measurement of the critical current of HTS tapes. This capability facilitates dynamic research into the relationship between irradiation-induced damage and the degradation of HTS critical-current performance. The facility combines advanced accelerator beam technology, a high-efficiency cooling system, high-vacuum technology, and real-time monitoring and diagnostic techniques, providing a stable, efficient, and fully functional experimental device for revealing in depth the mechanism by which ion irradiation affects the micro-defects and critical-current performance of HTS tapes. The facility will be presented in detail in the paper.
Speaker: xiaofeng zhu -
11:45
Optimising SiPM Array Architectures and Optical Coatings for High-Efficiency ZnSe(Al,O)–Based Strontium-90 Detection 20m
Strontium-90 (Sr-90) contamination of groundwater at nuclear sites like Sellafield (UK) and both the Savannah River Site and Hanford (US) remains an environmental challenge. Sr-90's high mobility and solubility demand monitoring systems capable of real-time, in-situ measurement.
Recent work towards a portable, aquatic beta spectrometer has shown that ZnSe(Al,O) is a strong scintillator candidate for detecting and characterising Sr-90. The current prototype couples a 42 mm diameter, 2.1 mm thick ZnSe(Al,O) crystal to a single SiPM and achieves 61.5% efficiency when benchmarked against an idealised optical simulation. Improving this efficiency and the energy resolution requires transitioning to a multi-SiPM configuration. To address this, a Geant4-based optical model has been developed to evaluate the SiPM number, spatial arrangement, and choice of surface coating needed to minimise photon loss. Simulation results show diminishing returns beyond an 8 × 8 SiPM array, enabling optimisation of performance relative to system cost. Applying a coating to the crystal improved collection efficiency by at least 35% compared to an uncoated crystal. Further simulations and experimental validation are to follow shortly. This optimisation stage is a critical step toward producing a fully field-deployable, high-efficiency Sr-90 spectrometer for groundwater monitoring.
Key words: ZnSe(Al,O) scintillator detectors, environmental monitoring, real-time radiation instrumentation, silicon photomultiplier arrays, optical transport modelling.
Speaker: Ms Arjana Kolnikaj (University of Glasgow) -
11:45
Physics-Aware and Hardware-Efficient Federated Learning for Isotope Identification in Distributed Edge Radiation Networks 20m
Real-time and accurate isotope identification is critical for nuclear safety. Traditional methods rely on continuously transmitting raw spectral data from edge nodes to a centralized server, imposing severe bandwidth constraints and privacy vulnerabilities. While Federated Learning (FL) offers a decentralized alternative for this scenario, its deployment in radiation networks is hindered by feature misalignment caused by spectral drift and latency constraints on edge devices. To address these challenges, we propose a Physics-Aware Hardware-Efficient Federated Learning (PAHE-FL) framework. First, we embed a physics-aware preprocessing module that utilizes the ubiquitous K-40 background signature for unsupervised gain stabilization, ensuring spectral consistency across edge devices. Second, to tackle non-IID environmental interference, we design a decoupled training strategy where the feature extractor is aggregated globally to learn universal characteristics, while classifier heads are updated locally to adapt to specific backgrounds. Finally, the model is optimized via post-training quantization to enable deployment on resource-constrained microcontrollers. Experimental results on STM32 devices demonstrate that PAHE-FL achieves millisecond-level latency and superior robustness, significantly outperforming standard approaches in dynamic environments.
Speakers: Ms Yuxin Wu (Tianjin University of Technology), Kai Shi (Tianjin University of Technology) -
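The K-40-based gain stabilization step can be sketched as follows: the ubiquitous 1460.8 keV background line is located in a search window and the energy axis is rescaled so the peak sits at its nominal position. The window, binning, and function names are illustrative assumptions:

```python
import numpy as np

K40_KEV = 1460.8  # nominal K-40 gamma line

def stabilize_gain(spectrum, energies_kev, search=(1300.0, 1600.0)):
    """Unsupervised gain stabilization: find the K-40 background peak
    within `search` (keV) and return a rescaled energy axis that places
    it at 1460.8 keV, restoring spectral alignment across edge devices
    whose detector gains have drifted."""
    lo, hi = np.searchsorted(energies_kev, search)
    peak_kev = energies_kev[lo + np.argmax(spectrum[lo:hi])]
    return energies_kev * (K40_KEV / peak_kev)
```

Aligning spectra this way before inference is what lets a globally aggregated feature extractor see consistent inputs across devices.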
11:45
Realization of Large Model-Driven Tango Control and Data Interaction Analysis Tool 20m
A highly flexible experimental system, integrating custom control/data acquisition hardware and supporting software, is developed to address the dynamic and diverse demands of experimental tasks. Built on Tango control architecture with multi-protocol support (serial, TCP/IP, MQTT), it enables unified management of various devices and rapid construction, debugging, and deployment of experimental setups. The system incorporates an innovative LLM-driven interactive tool that automates code generation/execution, analyzes debugging information and experimental data as per user instructions, and realizes intelligent fault diagnosis and closed-loop repair. Successfully applied in marine observation and preliminary HUNT experiments, it has proven efficient in debugging self-developed sensor modules, real-time data acquisition, and seamless multi-device integration. This practical application fully validates the system’s high efficiency, reliability, and strong adaptability in complex engineering scenarios.
Speaker: Xiaochuan Xie (IHEP) -
11:45
Temperature-Induced Delay Drift in Xilinx FPGA Multi-gigabit Transceivers 20m
In large-scale physical experiments, the multigigabit transceivers (MGTs) in field-programmable gate arrays (FPGAs) have become a prevalent choice to implement high-precision clock distribution and synchronization. However, the temperature-induced delay drift in MGTs and other electronic components has not been well investigated, posing a significant challenge to synchronization stability at picosecond levels. This paper first characterizes the temperature coefficients of the MGTs in a Xilinx Kintex UltraScale+ FPGA and then implements temperature compensation in a clock distribution and synchronization system to demonstrate the performance enhancement. Utilizing an on-chip, temperature-robust Digital Dual-Mixer Time Difference (DDMTD) method, the temperature coefficients of the transmitter (TX) and receiver (RX) are measured as 1.42 ps/°C and 0.59 ps/°C, respectively. After applying compensation, within a temperature range from 35°C to 80°C, the maximum drift of the clock distribution is reduced from 90.7 ps to 7.6 ps. The significant temperature effect in the TX is theoretically analyzed at the end of this paper.
Speaker: Yonggang Wang (University of Science and Technology of China) -
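The first-order compensation implied by the measured coefficients can be sketched as below; the coefficient value is the TX figure quoted in the abstract, while the reference temperature and function shape are illustrative assumptions:

```python
def compensated_delay(raw_delay_ps, temp_c, coeff_ps_per_c, t_ref_c=35.0):
    """Subtract the characterized linear temperature drift (e.g. ~1.42
    ps/degC for the TX path) relative to a reference temperature, so the
    residual delay used for synchronization is temperature-independent
    to first order."""
    return raw_delay_ps - coeff_ps_per_c * (temp_c - t_ref_c)
```

In the real system the on-die temperature sensor feeds this correction continuously, which is what reduces the 90.7 ps drift to 7.6 ps over 35-80°C.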
11:45
The Realization of Ultra-High Dose Rate and Potential Frontier Applications on 230 MeV SC Cyclotron CYCIAE-230 20m
A cyclotron-based proton therapy system developed by CIAE includes the superconducting cyclotron CYCIAE-230, a beam line with a fast energy selection system, a 360° gantry, and a pencil beam scanning nozzle. There is another beam line for proton irradiation, used for research on radiation effects in electronics and power devices. In the past two years, the performance of CYCIAE-230 has been significantly improved. We optimized the insulation structure of the cathode inside the micro-PIG source for high-power operation; adjusted the first gap in the central region; increased the duty cycle of the acceleration system from 35% to CW; fine-tuned the first harmonics and the positions of the HV electrostatic deflectors to increase the extraction efficiency; improved the vacuum around the HV feeding to significantly reduce multipactoring; and enhanced the operational stability. An extracted beam intensity of up to 1600 nA has been obtained, providing sufficient current for achieving ultra-high dose rates. Based on conservative estimates, the dose rate of CYCIAE-230 will be much higher than 3600 Gy/s. As published previously, the time interval for varying one energy step of the degrader and the 51 magnet units is 45 ms, and the isocenter accuracy of the 360° gantry is better than 0.3 mm. The potential frontier applications are therefore FLASH and Spot-Scanning Arc Therapy. With a high fluence rate of 5×10^11 p/cm²/s, a CW beam, a wide energy range, and high uniformity, the proton irradiation facility based on CYCIAE-230 is also suitable for research on radiation and biological effects.
Speaker: Prof. Tianjue Zhang (China Institute of Atomic Energy)
-
11:05
-
12:05
→
14:30
Lunch break 2h 25m
-
14:30
→
18:45
Excursions - Free time 4h 15m
-
18:45
→
20:00
WIE Bonaparte Room (Hotel Hermitage)
Bonaparte Room
Hotel Hermitage
-
08:30
→
09:10
-
-
08:30
→
09:10
Invited talk Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
-
08:30
Fusion 40m
Speaker: Axel Winter (Institut für Plasmaphysik Greifswald)
-
08:30
-
09:10
→
10:10
Data Acquisition and Trigger Architectures Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
-
09:10
Timing Distribution Method via 10-km Optical Fibers at the J-PARC for the Hyper-Kamiokande Project 20m
T2K (Tokai-to-Kamioka) is a long-baseline neutrino oscillation experiment that requires a reliable timing distribution system at J-PARC to receive the main-ring (MR) beam-kicker timing pulse signal and broadcast beam-trigger pulse signals with spill numbers to neutrino facilities in Tokai. The existing timing distribution system, originally developed in the K2K (KEK-to-Kamioka) era more than two decades ago, increasingly suffers from component obsolescence and occasional signal dropouts. Furthermore, the upcoming Hyper-Kamiokande project will install an Intermediate Water Cherenkov Detector (IWCD) approximately 1 km from J-PARC, and the required transmission distance is around 10 km when routed through the site-wide fiber infrastructure.
We propose a timing distribution method based on timing-pulse digitization and the MIKUMARI link protocol. In the timing distribution system, a main board digitizes the MR beam-kicker timing pulse signal by an FPGA ISERDES primitive and distributes its leading-edge timestamp together with a spill number to sub boards by the MIKUMARI link protocol via optical fibers. Based on the MIKUMARI link, the main board and sub boards maintain local time counters under frequency synchronization. The sub boards at the neutrino facilities map the received timestamp to their local counters and regenerate the beam-trigger pulse at the corresponding time using FPGA OSERDES primitives. Using FPGA I/OSERDES primitives to digitize and regenerate pulse signals, the system achieves sub-clock timing resolution.
In this presentation, we will report the details of the method, which can transmit the pulse signal over 10 km of optical fiber with a timing variation of 1 ns. Speaker: Che-Sheng Lin (SOKENDAI) -
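The timestamp-mapping step described above can be sketched in a few lines. This is an illustrative model only, not the MIKUMARI firmware: the function name, the fixed counter offset, and the 8-phase SERDES factor are assumptions for the sketch.

```python
SERDES_FACTOR = 8  # assumed sub-clock phases per clock period (I/OSERDES width)

def regeneration_time(main_timestamp, sub_phase, counter_offset, delay_ticks):
    """Map a main-board leading-edge timestamp to a frequency-synchronized
    sub-board counter and return (local clock tick, OSERDES bit position)
    at which the beam-trigger pulse should be regenerated.

    main_timestamp : main-board counter value at the pulse leading edge
    sub_phase      : sub-clock phase (0..SERDES_FACTOR-1) from the ISERDES
    counter_offset : known offset between main and local time counters
    delay_ticks    : fixed fiber/pipeline compensation, in clock ticks
    """
    total = (main_timestamp + counter_offset + delay_ticks) * SERDES_FACTOR + sub_phase
    return divmod(total, SERDES_FACTOR)

# e.g. timestamp 1000 at phase 5, offset 42, delay 10 -> tick 1052, phase 5
assert regeneration_time(1000, 5, 42, 10) == (1052, 5)
```

The sub-clock phase survives the mapping unchanged, which is how I/OSERDES digitization and regeneration achieve resolution finer than one clock period.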
09:30
TD-Link: A Deterministic Optical Daisy-Chain Link for Synchronous Data Acquisition in Large-Scale Detector Systems 20m
Modern large-scale nuclear and particle physics experiments require front-end readout systems capable of handling high data throughput while guaranteeing sub-nanosecond time synchronization over widely distributed detector elements. This paper presents TD-Link, a custom optical link protocol developed by Nuclear Instruments and CAEN for the FERS (Front-End Readout System) architecture. TD-Link is specifically designed to provide simultaneous data transmission and deterministic timing synchronization over a single optical fiber.
Unlike conventional timing and data distribution solutions such as White Rabbit, TD-Link adopts a multidrop daisy-chain topology. A single optical fiber originates from a Data Concentrator, traverses sequentially up to 16 FERS front-end boards, and finally closes the loop back to the concentrator. This ring-based architecture significantly reduces cabling complexity and port count, making it particularly suitable for large detectors where front-end boards are spatially distributed over extended areas. A single Data Concentrator equipped with eight optical ports can therefore manage and synchronize up to 128 front-end boards, each hosting 64 to 128 acquisition channels, enabling scalable systems with thousands of channels.
TD-Link operates at a line rate of 3.25 Gb/s and integrates an embedded clock recovery and distribution mechanism on each front-end board. Experimental measurements demonstrate an inter-board synchronization resolution of 35 ps RMS, validated using independent FERS boards equipped with CERN PicoTDC devices and correlated signal injection. These results confirm TD-Link as a compact, scalable, and high-precision solution for next-generation real-time readout systems.Speaker: Dr Andrea Abba (Nuclear Instruments SRL) -
09:50
Commissioning and Low Latency Operation of the Graph Neural Network Electromagnetic Calorimeter Trigger at the Belle II Experiment 20m
We present the commissioning and operation of the Graph Neural Network Electromagnetic Calorimeter Trigger Module (GNN-ETM) of the Belle II experiment at the SuperKEKB collider. The GNN-ETM processes calorimeter trigger cells as graph nodes to perform clustering and feature extraction. We fully integrate the system with the successive stages of the first-level trigger, develop slow-control drivers, and add online monitoring capabilities. We optimize the existing FPGA-based architecture through hardware–algorithm co-design to reduce the overall system latency from 3.141 µs to 1.050 µs. Our hardware implementation is validated through register-transfer-level simulations, achieving bit-accurate agreement with the offline reference model. Online monitoring enables the measurement of instantaneous trigger rates, providing a quantitative basis for trigger-level performance studies. In summary, we report on the GNN-ETM as a fully operational, low-latency trigger module with online control and monitoring capabilities.
Speaker: Marc Neu
-
09:10
-
10:10
→
10:40
Coffee break 30m
-
10:40
→
11:25
Mini Orals Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
Convener: Martin Grossmann PSI (Paul Scherrer Institut)-
11:00
134 FPGA-based RDMA for BEE Readout 20m
Traditional TCP/IP protocols rely heavily on CPU processing for tasks such as data packet encapsulation, parsing, and transfer, leading to substantial latency and resource consumption. In contrast, Remote Direct Memory Access (RDMA) enables direct data transfer between the network adapter and memory, bypassing the operating system kernel. This approach significantly reduces CPU overhead while delivering high bandwidth and ultra-low latency, making it an efficient solution for high-performance, data-intensive applications.
This paper presents the design and implementation of an FPGA-based RDMA protocol stack for the BEE readout system of the Circular Electron Positron Collider (CEPC). The implementation leverages the RoCEv2 protocol over standard Ethernet, utilizing the CMAC IP core for link- and physical-layer processing, a custom-designed RDMA core for protocol operations, and DDR for data buffering. By enabling kernel-bypass and zero-copy communication, the system can substantially lower both communication overhead and latency in the CEPC BEE-to-DAQ data link, thereby supporting the deployment of advanced software-based high-level triggers (HLTs) under extreme event rates.
Speaker: Chang Xu (IHEP, UCAS) -
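For context on the framing such an RDMA core must generate: RoCEv2 encapsulates InfiniBand transport packets in UDP (destination port 4791), each carrying a 12-byte Base Transport Header (BTH). The sketch below packs that header; it is an illustration of the wire format, not the authors' firmware, and the example opcode is the standard RC "RDMA WRITE First" code.

```python
import struct

ROCEV2_UDP_PORT = 4791  # IANA UDP port that identifies RoCEv2 traffic

def pack_bth(opcode, dest_qp, psn, pkey=0xFFFF, ack_req=False):
    """Pack the 12-byte InfiniBand Base Transport Header carried in RoCEv2.

    Big-endian layout: opcode(8) | SE/M/PadCnt/TVer(8) | P_Key(16) |
    resv(8)+DestQP(24) | AckReq(1)+resv(7)+PSN(24).
    """
    flags = 0                                  # SE=0, M=0, PadCnt=0, TVer=0
    word1 = dest_qp & 0xFFFFFF                 # reserved byte stays zero
    word2 = ((1 << 31) if ack_req else 0) | (psn & 0xFFFFFF)
    return struct.pack(">BBHII", opcode, flags, pkey, word1, word2)

# RC "RDMA WRITE First" (opcode 0x06) to QP 0x12, PSN 1, ack requested
hdr = pack_bth(opcode=0x06, dest_qp=0x12, psn=1, ack_req=True)
assert len(hdr) == 12
```

In a hardware implementation the same fields are assembled by the protocol core before the payload DMA, which is what makes the zero-copy path possible.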
11:05
115 Modular Ground Penetrating Radar Tomography System with a Combination of Star and Daisy Chain DAQ and Trigger Topologies with Picosecond Accuracy 20m
The AgraSim experiment at Forschungszentrum Jülich is a unique laboratory for research on the impact of climate conditions on agricultural ecosystems and for the optimization of climate models. For the high-resolution 3D tomography of soil parameters, we are developing a modular Ground Penetrating Radar (GPR) monitoring system with 39 DAQ modules, each containing 64 antennas. The modules will be mounted around a soil-filled lysimeter with a height of 1.5 m and a diameter of 1 m. The GPR emits a 3 ns long time-domain pulse generated by one of its module's DACs. The signal is guided through a 1-to-64 multiplexer to one antenna, penetrates the soil, and is received by another antenna on the other side of the lysimeter. The received signal is amplified and multiplexed to one of the eight ADCs of the corresponding DAQ module. This is done for all antenna combinations. All DAQ modules are controlled, triggered, and synchronized by a central main module that also collects the measurement data from the modules and forwards them to external storage. The topology of the DAQ system is a mixture of a star topology for the first 13 modules and a daisy-chain topology connecting two additional DAQ modules to each of the first modules. A full tomogram takes about 8 s and contains up to 25 GB of data. Initial evaluations show a synchronisation accuracy of about 40 ps. A system with two DAQ modules was recently tested at a real lysimeter.
Speaker: Dr Achim Mester (Forschungszentrum Jülich GmbH, ITE) -
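The quoted figures imply a substantial sustained data rate; a quick back-of-the-envelope check (assuming unordered transmit/receive antenna pairs, which the abstract does not specify):

```python
from math import comb

MODULES, ANTENNAS_PER_MODULE = 39, 64       # system size from the abstract
TOMOGRAM_BYTES, TOMOGRAM_SECONDS = 25e9, 8.0  # up to 25 GB in about 8 s

n_antennas = MODULES * ANTENNAS_PER_MODULE   # antennas around the lysimeter
n_pairs = comb(n_antennas, 2)                # unordered Tx/Rx combinations
avg_rate = TOMOGRAM_BYTES / TOMOGRAM_SECONDS  # bytes per second, sustained

assert n_antennas == 2496
assert n_pairs == 3_113_760
assert avg_rate == 3.125e9                   # ~3.1 GB/s during a tomogram
```

Even if only a subset of the 3.1 million pair combinations is recorded per tomogram, the aggregate rate motivates the central collector and mixed star/daisy-chain topology.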
11:05
135 FPGA-Based Deep Learning Acceleration for Real-Time $z$-Vertex Reconstruction in STCF L1 Trigger 20m
The Super Tau-Charm Facility (STCF) experiment will operate at high instantaneous luminosity, placing stringent requirements on the real-time performance of its L1 trigger system. Fast and reliable reconstruction of the primary vertex position along the beam axis z is essential for effective background suppression and early event selection under fixed latency constraints.
We present a deep neural network for real-time z-vertex reconstruction designed for deployment in the STCF L1 trigger. The network operates on high-level features extracted from track segments reconstructed in the central drift chamber. For each track candidate, information from eight super-layers is used, including the local track angle, relative azimuthal angle, and drift time. A lightweight attention mechanism is applied to dynamically reweight contributions from different detector layers, followed by one-dimensional convolutional and fully connected layers to exploit geometric correlations while maintaining hardware efficiency.
The proposed models are trained using simulated STCF events and evaluated against offline reference vertices. A z-vertex resolution of approximately 5 cm is achieved, which is sufficient for trigger-level discrimination and remains stable across the studied kinematic range. The networks are optimized and implemented on FPGA using the hls4ml framework with quantization-aware training, enabling fixed-latency and fully pipelined inference.
A systematic design space exploration is performed to study the trade-offs between vertex resolution, inference latency, and FPGA resource utilization. The results demonstrate that the proposed architectures satisfy L1 trigger latency constraints while offering flexibility for further optimization. This work highlights the feasibility and potential of deep learning–based vertex reconstruction in real-time trigger systems.Speaker: Shuangshuang Zhang (Shandong University) -
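The layer-reweighting idea in the abstract can be illustrated with a minimal NumPy sketch. The scoring vector and feature count are placeholders, not the trained STCF model; only the shape of the computation (per-layer scores, softmax weights, rescaled features) follows the description.

```python
import numpy as np

def attention_reweight(x, w):
    """Lightweight attention over detector layers: score each super-layer,
    softmax the scores, and rescale that layer's features accordingly.

    x : (8, F) features for the eight super-layers of one track candidate
    w : (F,)  scoring vector (learned in the real model; random here)
    """
    scores = x @ w                       # one scalar score per layer
    a = np.exp(scores - scores.max())    # numerically stable softmax
    a /= a.sum()
    return a[:, None] * x                # layers reweighted by attention

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))   # e.g. local angle, relative phi, drift time
w = rng.normal(size=3)
y = attention_reweight(x, w)
assert y.shape == (8, 3)
```

In the FPGA version such a block maps naturally onto fixed-point multiplies, which is why it can precede the convolutional layers without breaking the latency budget.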
11:05
136 Implementation of an Acquisition and Monitoring Module Based on the PTPv2 Protocol on EAST 20m
There are a large number of distributed control and monitoring devices in magnetic confinement fusion facilities. It is essential to conduct real-time monitoring of key parameters (such as temperature and voltage) of these devices, which is of great significance for ensuring the safe operation of the facilities and their associated systems.
This paper presents monitoring equipment built around a Field Programmable Gate Array (FPGA) and based on the PTPv2 protocol. First, the equipment supports the PTPv2 protocol to ensure consistent timestamps among acquisition and monitoring devices. Second, it is equipped with multi-range voltage conditioning and wide-temperature-range temperature acquisition channels, which, combined with a high-resolution analog-to-digital conversion module, enable dynamic acquisition and data processing of weak signals. Third, the equipment features self-test and remote start-up functions, facilitating remote device management.
Test results show that the synchronization accuracy between devices is better than 50 ns; the voltage measurement accuracy is better than 0.04% F.S. in the low range (0–1 V) and better than 0.02% F.S. in the high range (1–10 V); within the temperature range of 20 K to 325 K, the acquisition error does not exceed 0.13 K. All these indicators meet the design requirements. Speaker: Zuchao Zhang (Institute of Plasma Physics, Chinese Academy of Sciences) -
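The synchronization accuracy above rests on the standard PTPv2 offset and delay computation from one Sync/Delay_Req exchange, which assumes a symmetric path:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard PTP servo inputs from one Sync/Delay_Req exchange.

    t1: master sends Sync        t2: slave receives Sync
    t3: slave sends Delay_Req    t4: master receives Delay_Req
    PTPv2 assumes the two directions see the same path delay.
    """
    ms = t2 - t1              # master-to-slave interval (includes offset)
    sm = t4 - t3              # slave-to-master interval
    offset = (ms - sm) / 2    # slave clock error relative to the master
    delay = (ms + sm) / 2     # one-way mean path delay
    return offset, delay

# Slave running 100 ns ahead over a 500 ns path:
# ms = 500 + 100, sm = 500 - 100  ->  offset = 100 ns, delay = 500 ns
assert ptp_offset_and_delay(0, 600, 1000, 1400) == (100.0, 500.0)
```

Hardware timestamping in the FPGA removes software jitter from t1–t4, which is what makes sub-50 ns agreement between devices achievable.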
11:05
148 Data Processing Firmware of the Upstream Tracker for LHCb Run 3 20m
LHCb, one of the four main experiments at CERN, was upgraded for Run 3 to enable fully software-based triggering and data acquisition. A key component of this upgrade is the Upstream Tracker (UT), a silicon microstrip detector installed in early 2023. The UT comprises 968 silicon sensors and approximately 4192 SALT ASICs, which perform analog processing, digitization, common-mode subtraction, zero suppression, and data serialization. The resulting data streams are transmitted to the TELL40 back-end readout boards.
Each TELL40 board can be viewed, in simplified terms, as a set of optical fibre receivers connected to a high-performance Intel Arria 10 FPGA, responsible for data processing and transfer to a PCIe interface.
This contribution focuses on the TELL40 FPGA gateware developed for the UT readout. It describes the challenges posed by the UT electronics architecture, data formats, high data rates and zero deadtime. It presents the adopted mitigation strategies, which reduced system complexity at the cost of specific trade-offs.
In addition to the core data acquisition functionality, several monitoring and control mechanisms were implemented within the Experiment Control System (ECS) and the Timing and Fast Control (TFC) system. The resulting architecture and the key design decisions are presented.
Finally, the development of a system with custom inputs and outputs required the application of dedicated design verification and debugging techniques, as well as the development of supporting software, which are also described.
Speaker: Carlos Abellan Beteta (University of Zurich (CH)) -
11:05
150 A Real-Time Trigger and DAQ System for the MuEDM Experiment 20m
The electric dipole moment (EDM) of fundamental particles is intimately connected to the violation of time-reversal invariance T and of the combined charge-parity symmetry CP. The MuEDM experiment aims to measure the muon EDM with enhanced sensitivity using, for the first time worldwide, the frozen-spin technique by studying the up–down asymmetry of positrons from muon decay. A polarized muon beam from the Paul Scherrer Institute (PSI) is injected into a uniform magnetic field region, where muons are stored and observed. A fast and selective trigger system is a critical component of the experiment, responsible for identifying storable muons and activating the pulse kicker within a latency below 105 ns. The MuEDM Muon Trigger Detector (MTD) is based on an anti-coincidence scheme between Gate and Aperture plastic scintillators, optimized for high efficiency, compactness, and strong background rejection. The trigger concept has been validated in test beams performed in 2022 and 2024, showing more than 97% agreement between measurements and simulations. In parallel, the data acquisition system has been developed around the CAEN FERS A5202 electronics, providing high channel density, precise timing, integrated SiPM biasing, and triggered readout capabilities. The DAQ has been fully integrated into the MIDAS framework, enabling run control, slow control, and data storage in multiple formats. The system was successfully operated during the December 2025 test beam, where stable acquisition and detector performance were demonstrated. Selected results from laboratory measurements and beam tests will be presented. Speaker: Carlo Veo (Università di Pisa) -
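The anti-coincidence logic of the MTD can be sketched as a pure function: fire on a Gate scintillator hit with no Aperture hit inside a veto window. The 10 ns window and the function name are illustrative, not parameters from the experiment.

```python
def mtd_trigger(gate_hit_ns, aperture_hits_ns, veto_window_ns=10.0):
    """Anti-coincidence sketch for the Gate/Aperture scheme: return True
    (pulse the kicker) only when no Aperture hit accompanies the Gate hit.
    """
    for t in aperture_hits_ns:
        if abs(t - gate_hit_ns) <= veto_window_ns:
            return False        # muon exited through the aperture: veto
    return True                 # Gate-only signature: storable muon

assert mtd_trigger(100.0, [300.0]) is True    # aperture hit far away
assert mtd_trigger(100.0, [105.0]) is False   # coincident aperture hit
```

In hardware this reduces to a single AND-NOT of discriminated scintillator signals, which is how the sub-105 ns latency budget is met.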
11:05
152 Operations and Performance of the ATLAS Tile Calorimeter Phase-II Upgrade Demonstrator in Run 3 20m
The Tile Calorimeter (TileCal) is a sampling hadronic calorimeter that covers the central region of the ATLAS experiment at the Large Hadron Collider (LHC). The LHC will undergo a series of upgrades leading to the High-Luminosity LHC (HL-LHC). The TileCal Phase-II Upgrade will accommodate the detector readout electronics to meet the challenges of a 1 MHz trigger rate, higher ambient radiation levels, and increased pile-up conditions.
The TileCal Phase-II upgrade project has undertaken an extensive R&D program. The Demonstrator Phase-II Upgrade module was built in 2014 with the upgraded readout electronics and backward compatibility with the present ATLAS Trigger and Data Acquisition system. Its electronics were evaluated during seven test beam campaigns using the CERN SPS fixed target facility. To gain more experience with collision data, the Demonstrator Phase-II Upgrade module was inserted into the ATLAS experiment in 2019. This module operates under real detector conditions during Run-3 (2022–2026).
This contribution describes the hardware and software upgrades of the Demonstrator Phase-II Upgrade module and discusses the operations findings from this module within ATLAS, as well as the latest performance results.
Speaker: Danijela Bogavac (IFAE) -
11:05
153 Studies of track reconstruction performance in the ATLAS Event Filter for the HL-LHC 20m
The instantaneous luminosity at the High-Luminosity LHC (HL-LHC) will reach unprecedented levels, boosting the physics reach at the LHC. To cope with the resulting challenging pile-up condition and fully exploit the new high-granularity Inner Tracker (ITk), a major upgrade of the ATLAS Trigger and Data Acquisition (TDAQ) system is ongoing, with track reconstruction in the Event Filter being a critical component. Achieving an online tracking performance close to that of offline algorithms is essential to ensure a successful physics program at HL-LHC, providing the required trigger efficiency while maintaining sustainable trigger rates. Over the past years, an extensive R&D effort has been carried out to design a heterogeneous computing system, exploring possible integrations of CPU cores with GPU or FPGA accelerators at different stages of the tracking workflow, to identify the technology with the highest potential in terms of throughput, power consumption, cost, and tracking performance. This contribution will focus on the remarkable tracking performance achieved across the different technologies, demonstrating the strong potential of tracking at the Event Filter level.
Speaker: Marco Aparo (University of Sussex (GB)) -
11:05
162 A Timing Measurement Prototype for LGAD Using the LATRIC ASIC 20m
This paper presents the design and characterization of a readout prototype for Low-Gain Avalanche Diode (LGAD) sensors. Aimed at applications requiring excellent timing resolution, such as particle tracking in future high-energy physics experiments, the prototype addresses the key challenge of processing LGAD signals with fast rise times and moderate gain. The dedicated system integrates two application-specific integrated circuits (ASICs), named LATRIC0 (LGAD Timing Readout Integrated Circuit), on a compact printed circuit board. LATRIC0 is a single-channel timing measurement chip that integrates a low-noise, high-bandwidth front-end amplifier, a fast comparator, and a high-precision time-to-digital converter (TDC). LGAD pixels with a size of 2.5 mm × 2.5 mm are wire bonded on the board. Initial bench-top testing using a pulsed laser demonstrates a coincident-hit timing resolution of ~25 ps between two LATRIC chips, corresponding to a single LGAD+LATRIC timing resolution of ~17 ps. These results validate a scalable and cost-effective readout architecture and confirm its strong potential for integration into large-area LGAD-based timing detector systems.
Speakers: Chuanye Wang (Nanjing University), Xiongbo Yan (Institute of High Energy Physics) -
11:05
166 Toward Efficient Wireless Monitoring in Nuclear Facilities Based on Named Data Networking 20m
Nuclear facilities are gradually deploying large-scale wireless sensing systems to construct nuclear monitoring networks for continuous monitoring of equipment conditions and environmental parameters, such as the nuclear radiation monitoring system of EAST. In such networks, sensing data is frequently queried and reused under dynamic wireless conditions, which places stringent requirements on data availability and delivery efficiency. Named Data Networking (NDN), with its data-centric communication paradigm and in-network caching, enables repeated data access through cache reuse and mitigates single-point failures caused by unstable links or node outages. However, when monitoring queries exhibit strong correlation and network conditions change frequently, existing NDN mechanisms suffer from inefficient cache utilization and redundant Interest transmissions. To address these challenges, this paper proposes an association-aware network-coded NDN (NC-NDN) framework with a distributed caching strategy tailored for nuclear sensing environments. By incorporating random linear network coding, data retrieval is shifted from packet-level access to content delivery based on linear network coding, enabling efficient multipath parallel transmission without relying on a single data source. Without altering the fundamental NDN communication paradigm, each node maintains a lightweight cache trail to capture historical request patterns and identify highly correlated coded blocks, enabling coordinated cache organization and adaptive in-network re-encoding. Experimental results demonstrate that the proposed framework achieves consistently lower data retrieval latency and hop count than representative baseline schemes, confirming its effectiveness in improving data delivery efficiency.
Speaker: Ziwen Lu (Tianjin University of Technology) -
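The random linear network coding at the heart of NC-NDN can be illustrated over GF(2), where encoding is an XOR of randomly selected blocks and decoding is Gaussian elimination on the coefficient vectors. Practical RLNC typically uses a larger field such as GF(256); this sketch shows only the principle, with blocks represented as integers.

```python
import random

def encode(blocks, rng):
    """One coded packet over GF(2): a random non-zero coefficient bitmask
    plus the XOR of the selected original blocks."""
    k = len(blocks)
    mask = rng.randrange(1, 1 << k)
    payload = 0
    for i in range(k):
        if mask >> i & 1:
            payload ^= blocks[i]
    return mask, payload

def decode(packets, k):
    """Recover the k originals by Gaussian elimination over GF(2), given
    at least k linearly independent (mask, payload) coded packets."""
    basis = {}                              # pivot bit -> (mask, payload)
    for mask, payload in packets:
        while mask:
            bit = mask.bit_length() - 1
            if bit not in basis:
                basis[bit] = (mask, payload)
                break
            m2, p2 = basis[bit]
            mask ^= m2                      # reduce against existing pivot
            payload ^= p2
    out = [0] * k
    for bit in sorted(basis):               # back-substitute, low bits first
        mask, payload = basis[bit]
        for i in range(bit):
            if mask >> i & 1:
                payload ^= out[i]
        out[bit] = payload
    return out

b = [0xDEAD, 0xBEEF, 0xCAFE, 0x1234]
pkts = [(0b0011, b[0] ^ b[1]), (0b0110, b[1] ^ b[2]),
        (0b1100, b[2] ^ b[3]), (0b1000, b[3])]
assert decode(pkts, 4) == b                 # 4 independent combinations suffice
extra = encode(b, random.Random(7))
assert decode(pkts + [extra], 4) == b       # redundant packets are harmless
```

Any full-rank set of coded packets reconstructs the content, which is why a consumer can pull blocks from several caches in parallel without naming a specific source.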
11:05
173 Partial restart of a distributed data acquisition system 20m
Modern detector systems utilize numerous Front-End Electronics (FEEs). These FEEs can occasionally become unstable, requiring a restart. Furthermore, to improve the signal-to-noise ratio, FEEs are often mounted near the detector, placing them in the vicinity of the beam, where radiation-induced Single Event Upsets (SEUs) can cause malfunctions or shutdowns.
In conventional data acquisition (DAQ) procedures, when an FEE malfunctioned, the common practice was to stop the entire data acquisition, restart the problematic FEE, and then resume DAQ. If FEE malfunctions occur frequently, this procedure results in significant downtime, negatively impacting the statistical precision of the experiment.
To mitigate this, we implement a new procedure: stopping the readout from the troubled FEE, restarting the FEE, and then resuming its readout, all without stopping the overall data acquisition process.
A distributed DAQ system we are developing, called NestDAQ, performs data collection through the cooperation of numerous single-function processes. Among these processes, the TimeFrameBuilder (TFB) is responsible for collecting and consolidating data from all FEEs. This process detects FEEs from which data is not arriving and reports the anomaly to an online database. Another process for recovery monitors the database, controls the Sampler process which reads out the problematic FEE, commands it to stop readout, initiates the FEE restart, and controls the FEE's re-entry into the DAQ stream. This mechanism resolves FEE issues without stopping the DAQ, allowing data collection to continue uninterrupted.
We will explain the mechanism and implementation of this partial restart feature on NestDAQ.Speaker: Yoichi Igarashi (KEK) -
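The recovery flow described in the abstract can be sketched as a small state machine. The names mirror the abstract (Sampler, a database of FEE states); the classes and function are placeholders, not NestDAQ APIs.

```python
class Sampler:
    """Stand-in for the NestDAQ Sampler process that reads out one FEE."""
    def __init__(self):
        self.running = True
    def stop(self):
        self.running = False
    def start(self):
        self.running = True

def recover_fee(fee_id, db, samplers, restart_fee):
    """React to a TFB report that `fee_id` stopped delivering data,
    without touching the rest of the running DAQ."""
    if db.get(fee_id) != "stalled":      # act only on reported FEEs
        return False
    samplers[fee_id].stop()              # detach its readout stream
    restart_fee(fee_id)                  # power-cycle / reconfigure the FEE
    samplers[fee_id].start()             # re-enter the DAQ stream
    db[fee_id] = "running"
    return True

db = {"fee07": "stalled", "fee08": "running"}
samplers = {"fee07": Sampler(), "fee08": Sampler()}
restarted = []
assert recover_fee("fee07", db, samplers, restarted.append)
assert not recover_fee("fee08", db, samplers, restarted.append)
assert db["fee07"] == "running" and restarted == ["fee07"]
```

The key design point is that only the affected Sampler is stopped; the TFB simply builds time frames without that stream until it rejoins.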
11:05
177 A 10-ps Resolution Direct-Digitizing Time Measurement ASIC for Fast Detectors 20m
In nuclear and particle physics experiments, high-precision time measurement is a core technology for applications such as Time-of-Flight (TOF). Fast detectors, represented by Multi-gap Resistive Plate Chambers (MRPCs), are widely used in large-scale and high-resolution detector systems due to their excellent timing performance. However, their weak and fast output signals, combined with the integration limitations of traditional architectures that separate the analog front-end from digital quantization, pose severe challenges to readout electronics. To address this, an 8-channel high-precision time measurement ASIC, featuring monolithic integration of the analog front-end and digital back-end, was designed and implemented in a 180 nm CMOS process. The chip integrates pre-amplifiers, discriminators, Time-to-Digital Converters (TDCs), and a Phase-Locked Loop (PLL). The pre-amplifier employs a Regulated Cascode (RGC) topology to achieve both high bandwidth and low noise, while providing a stable and low input impedance for matching with various readout applications. The discriminator utilizes a differential saturated amplification structure, featuring low noise and low jitter. The TDC is based on a two-step vernier architecture, effectively balancing timing resolution, power consumption, and dead time. Furthermore, an on-chip low-jitter PLL based on a Multiplying Delay-Locked Loop (MDLL) provides a high-stability reference clock for the TDC. Test results from the fabricated chip show a stable input impedance as low as 15 Ω and a timing precision of 10 ps with a 100 fC input charge, validating the capability of the proposed monolithic solution for high-precision readout.
Speaker: Zhuang Li (University of Science and Technology of China (CN)) -
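The vernier stage of the two-step TDC can be modelled in a few lines: two oscillators whose periods differ by the target LSB close the start–stop gap by one LSB per cycle. The 100/90 periods below are illustrative values chosen so the LSB matches the 10 ps figure; the real chip's oscillator periods are not stated in the abstract.

```python
def vernier_tdc(dt, t1=100.0, t2=90.0):
    """Vernier TDC sketch: the start signal launches a slow oscillator
    (period t1) and the stop signal launches a faster one (period t2);
    the fast edges gain (t1 - t2) per cycle, so the interval dt is
    digitized with LSB = t1 - t2 (10 here, in arbitrary ps-like units).
    """
    n = 1
    slow, fast = t1, dt + t2          # first edge of each oscillator
    while fast > slow:                # fast edge still trails the slow one
        n += 1
        slow += t1
        fast += t2
    return n * (t1 - t2)              # coincidence cycle count times LSB

assert vernier_tdc(35.0) == 40.0      # 35 digitized to the next 10-unit LSB
assert vernier_tdc(5.0) == 10.0
```

The coarse counter supplies the integer clock periods, and this fine interpolation covers the residue, which is the balance of resolution, power, and dead time the abstract refers to.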
11:05
179 Design of Prototype Readout Electronics of the High Counting Rate Main Drift Chamber in Particle Physics Experiments 20m
The main drift chamber (MDC) is a crucial component of large colliders. As a track detector, it requires excellent position resolution, which depends on high-precision charge and time measurements provided by the readout electronics. With the increase in collider luminosity, the counting rate of the main drift chamber also rises. Existing readout electronics systems for main drift chambers cannot address issues such as complex multi-peak structures, significant waveform inconsistencies, and waveform pile-up caused by prolonged signal duration under high counting rates. In this study, a prototype readout electronics system for high-counting-rate main drift chambers was designed. First, a simulation of the waveform detected by the high-counting-rate main drift chamber was conducted. Based on the features of the simulated waveforms, the front-end readout module (FEM) was designed. The FEM employs a transimpedance amplifier (TIA) to achieve distortion-free amplification of small signals. The data acquisition module (DAM) utilizes an analog-to-digital converter (ADC) and a field-programmable gate array (FPGA) to digitize the waveforms. It can digitize the multi-channel signals from the FEM, simultaneously perform charge and time measurements, and thereby calculate dE/dx, the energy-loss information, enabling track detection. Test results from high-counting-rate MDC waveform simulation tests and joint detector tests show that within the dynamic range of 60–1800 fC, the system delivers a charge resolution better than 8 fC and a time resolution better than 1 ns, meeting the requirements for high-counting-rate main drift chambers.
Speaker: Yilin Ma (University of Science and Technology of China) -
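A common way to turn per-hit charge measurements into a dE/dx estimate is a truncated mean, which suppresses the Landau tail of the energy-loss distribution. The abstract does not specify the estimator; the 70% retention fraction below is a conventional illustrative choice.

```python
def dedx_truncated_mean(charges_fC, keep_fraction=0.7):
    """Estimate dE/dx from per-hit charge samples with a truncated mean:
    sort the samples and average only the lowest keep_fraction of them,
    discarding the Landau tail."""
    s = sorted(charges_fC)
    keep = max(1, int(len(s) * keep_fraction))
    kept = s[:keep]                       # drop the largest samples
    return sum(kept) / len(kept)

# One tail hit (900 fC) is excluded, leaving the mean of the lowest 7 hits
hits = [110, 95, 120, 105, 130, 900, 100, 115, 125, 140]
assert dedx_truncated_mean(hits) == 110.0
```

Stable charge resolution per hit, as quoted above, is what makes such track-level estimators reliable for particle identification.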
11:05
180 Prototype Readout Electronics System for a LET Spectrometer in Space Radiation 20m
Space radiation poses a major threat during space missions. Linear energy transfer (LET) is widely used for quantitatively assessing the effects of space radiation. Space radiation is characterized by a heterogeneous composition of particle types, a broad energy spectrum, and temporal variability, making it difficult to achieve high-precision and large-dynamic-range real-time LET detection. In this paper, a prototype readout electronics system is proposed. The prototype system is designed for a LET spectrometer using a novel dynamic range extension method, achieving a large dynamic range and high measurement precision in real-time detection. This prototype system is developed for silicon telescopes composed of double-sided silicon strip detectors (DSSDs) of up to 3 layers. It contains 3 front-end electronics (FEE) modules and 1 data acquisition module (DAM). Electrical tests show that the charge dynamic range is up to about 700 fC while the equivalent noise charge (ENC) of the prototype in high gain mode is less than 0.17 fC for all channels. The extended dynamic range is about 4000. To validate the LET measurement methodology, a neutron radiation field test was conducted at the China Institute of Atomic Energy. GEANT4-based simulations were performed to verify the correctness of the experimental results. The experimental results show good agreement with simulations.
Speaker: Mr Wenrui Sun (University of Science and Technology of China (CN)) -
11:05
190 A 14-bit 100 MS/s Pipeline ADC for a Silicon Pixel Sensor in Gaseous Detectors 20m
The Heavy Ion Research Facility in Lanzhou (HIRFL) and the future High-Intensity Heavy-ion Accelerator Facility (HIAF) are China's leading heavy-ion research centers. Gaseous detectors are widely used in experiments at HIRFL and HIAF because they are cost-effective and have a minimal material budget for tracking. To address the readout needs of high-count-rate, high-resolution gaseous detectors with multidimensional measurements in future experimental setups, a silicon pixel sensor has been proposed. This silicon pixel sensor, designed in a 130 nm CMOS process, is expected to provide micrometer-level position resolution, energy measurement with noise of tens of electrons, and timing accuracy in the nanosecond range. Serving as the critical component of the silicon pixel sensor, a 14-bit, 100 MS/s ADC converts analog signals representing time and energy from a region of pixels into digital signals. This 14-bit 100 MS/s pipeline ADC employs a fully differential architecture with a redundancy-correction algorithm and a SHA-less technique. It consists of a 3.5-bit first stage, four 2.5-bit stages, and a 3-bit Flash ADC, which are controlled by two non-overlapping clocks. The MDAC circuit uses switched-capacitor comparators and cascode operational amplifiers with gain boosting. The ADC covers an area of 2600 μm × 2460 μm, consumes 140 mW at a 1.2 V supply, and achieves an effective number of bits of 13.65. This paper will present the design and performance of this ADC.
Speaker: Haoqing Xie (Institute of Modern Physics, Chinese Academy of Sciences) -
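The quoted stage arrangement resolves 14 bits because adjacent stages overlap by one redundant bit: 3 effective bits from the 3.5-bit first stage, 2 from each 2.5-bit stage, and 3 from the flash. The shift-and-add digital correction can be sketched as follows; the raw-code widths are assumptions consistent with that split, not the chip's exact correction logic.

```python
def combine_stage_codes(codes, stage_bits=(4, 3, 3, 3, 3, 3)):
    """Shift-and-add digital error correction sketch: one 3.5-bit stage
    (4 raw bits), four 2.5-bit stages (3 raw bits each) and a 3-bit flash,
    each overlapping the next stage by one redundant bit, so the effective
    resolution is 3 + 4*2 + 3 = 14 bits."""
    effective = [b - 1 for b in stage_bits[:-1]] + [stage_bits[-1]]
    total = sum(effective)                    # width of the output word
    result, shift = 0, total
    for raw, eff in zip(codes, effective):
        shift -= eff
        result += raw << shift                # overlap bits carry upward here
    return result & ((1 << total) - 1)

assert combine_stage_codes([0] * 6) == 0
assert combine_stage_codes([7, 3, 3, 3, 3, 7]) == (1 << 14) - 1  # full scale
```

The half-bit of redundancy per stage is what lets comparator offsets in one stage be absorbed by the next stage's range instead of appearing as missing codes.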
11:05
194 Performance Test of the SAMIDARE Board with Mini-TPC Using Heavy-Ion Beams 20m
The SPADI Alliance is developing a next-generation streaming data acquisition platform, aiming at a common and scalable streaming readout system. As part of the SPADI task force, we are developing the waveform digitizer board SAMIDARE to promote future standardization for large-scale TPC-based experiments, such as E16, HypTPC, SPiRIT, MAIKo, and CAT-M. In this study, as a first application example of the SAMIDARE-based DAQ system, we applied the prototype board to the active-target detector CAT-M. CAT-M has been developed for systematic measurements of the isoscalar giant monopole resonance (ISGMR), which provide constraints on the isospin-dependent incompressibility term $K_\tau$, an essential component of the nuclear matter equation of state. It consists of a compact beam-tracking TPC (Mini-TPC), a large TPC for recoil particle detection, and silicon strip detectors. Tracking and particle identification are performed using waveform information, enabling reaction reconstruction in inverse kinematics experiments under high-intensity heavy-ion beam irradiation. In this presentation, we report the results of a performance test of the SAMIDARE prototype board, conducted under heavy-ion beam irradiation using the Mini-TPC.
Speaker: Fumitaka ENDO (RIKEN Nishina Center) -
11:05
200 Design of a Novel Pipelined SAR ADC for Multi-channel Front-end ASICs 20m
Frontier physics explorations, such as the heavy-ion experiments at the High-Intensity Heavy-ion Accelerator Facility (HIAF) and the NvDEx experiment at CJPL searching for neutrinoless double-beta decay, demand and thus propel the rapid development of new detector technologies. To address this demand, the next-generation multi-channel readout ASIC must integrate high-performance analog-to-digital converters as a core component. The ADC must deliver high precision and high speed to accurately capture signals, and operate within a tight power budget and a compact area to support high-density on-chip integration. In this paper, we have developed a novel 14-bit, 100 MS/s, pipelined SAR ADC in a 55 nm CMOS process. Two CDACs are used in stage one's SAR ADC and MDAC to save power. Also, a novel MDAC structure that separates the CDAC from the amplifier inputs is adopted to improve the speed and gain error. The ADC adds an extra reset phase to eliminate the memory effect of the CDAC in the stage-two SAR ADC. This paper will present the design and performance of this ADC.
Speakers: Mr Haoqing Xie (Institute of Modern Physics, Chinese Academy of Sciences), Prof. Chengxin Zhao (Jiangnan University) -
11:05
216 A Packaged Streaming-Readout Data Acquisition System by the SPADI Alliance for Nuclear and Particle Physics Experiments 20m
The rapid increase in beam intensities and detector granularity in modern
nuclear and particle physics experiments is pushing conventional
hardware-trigger-based data acquisition systems to their practical limits.
Streaming and triggerless readout architectures, in which detector signals are
continuously digitized and transferred without an explicit first-level
trigger, provide a promising alternative by shifting event selection and data
reduction to flexible software layers.The SPADI (Signal Processing and Data Acquisition Infrastructure) Alliance has
been established to develop a packaged streaming-readout data acquisition
system that integrates front-end electronics, readout and computing software,
and analysis frameworks into a deployable and experiment-agnostic solution.
The system adopts the AMANEQ board as front-end electronics, employs NestDAQ
for streaming data readout and processing, and utilizes the ROOT-based
analysis framework ARTEMIS for online and offline data analysis.
This contribution presents the concept and architecture of the SPADI packaged
streaming-readout data acquisition system and discusses its applicability to
nuclear and particle physics experiments. Speaker: Nobuyuki Kobayashi (Research Center for Nuclear Physics)
-
11:00
-
11:25
→
12:30
Poster session: Data Acquisition and Trigger Architectures
-
11:25
Partial restart of a distributed data acquisition system 20m
Modern detector systems utilize numerous Front-End Electronics (FEEs). These FEEs can occasionally become unstable, requiring a restart. Furthermore, to improve the signal-to-noise ratio, FEEs are often mounted near the detector, placing them in the vicinity of the beam, where radiation-induced Single Event Upsets (SEUs) can cause malfunctions or shutdowns.
In conventional data acquisition (DAQ) procedures, when an FEE malfunctions, the common practice is to stop the entire data acquisition, restart the problematic FEE, and then resume DAQ. If FEE malfunctions occur frequently, this procedure results in significant downtime, negatively impacting the statistical precision of the experiment.
To mitigate this, we implement a new procedure: stopping the readout from the troubled FEE, restarting the FEE, and then resuming its readout, all without stopping the overall data acquisition process.
A distributed DAQ system we are developing, called NestDAQ, performs data collection through the cooperation of numerous single-function processes. Among these processes, the TimeFrameBuilder (TFB) is responsible for collecting and consolidating data from all FEEs. This process detects FEEs from which data is not arriving and reports the anomaly to an online database. A separate recovery process monitors the database, controls the Sampler process that reads out the problematic FEE, commands it to stop readout, initiates the FEE restart, and controls the FEE's re-entry into the DAQ stream. This mechanism resolves FEE issues without stopping the DAQ, allowing data collection to continue uninterrupted.
We will explain the mechanism and implementation of this partial restart feature in NestDAQ. Speaker: Yoichi Igarashi (KEK) -
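The stale-FEE detection and single-source recovery loop described above can be sketched as follows. The class name, FEE identifiers, and timeout are illustrative and do not reflect NestDAQ's actual API:

```python
import time

class PartialRestartWatchdog:
    """Track last-seen times per front-end; restart only the silent one."""

    def __init__(self, timeout_s=1.0, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock
        self.last_seen = {}   # fee_id -> last data arrival time
        self.restarts = []    # record of recovered FEEs

    def on_data(self, fee_id):
        self.last_seen[fee_id] = self.clock()

    def check(self):
        now = self.clock()
        for fee_id, t in self.last_seen.items():
            if now - t > self.timeout_s:
                self._restart(fee_id)

    def _restart(self, fee_id):
        # In the real system: stop the Sampler, power-cycle the FEE,
        # then re-enter it into the DAQ stream -- all without a global stop.
        self.restarts.append(fee_id)
        self.last_seen[fee_id] = self.clock()

# Simulated clock so the example is deterministic
t = [0.0]
wd = PartialRestartWatchdog(timeout_s=1.0, clock=lambda: t[0])
wd.on_data("fee-A"); wd.on_data("fee-B")
t[0] = 0.5; wd.on_data("fee-B")   # fee-A goes quiet
t[0] = 1.2; wd.check()
print(wd.restarts)  # ['fee-A']
```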
12:05
A Modular FPGA-Based Free-Running Architecture for PET DAQ in Intraoperative Imaging 20m
We present a compact and modular data acquisition and real-time processing architecture for silicon photomultiplier (SiPM)–based PET detector modules, developed with a focus on intraoperative imaging for surgical margin assessment. The proposed system combines multiplexing of a SiPM array with free-running high-speed analog-to-digital converters and fully digital on-FPGA signal processing. This approach significantly reduces channel count and analog front-end complexity while enabling deterministic, low-latency extraction of event energy, timestamp, and interaction position.
A 2×2 SiPM array coupled to a pixelated BGO scintillator is read out through an Anger-like multiplexing network, generating four position-encoded signals that are continuously digitized at 125 MSPS. All signal processing—baseline estimation, pulse detection, timing, energy estimation, and position reconstruction—is performed in real time on an FPGA, with event data streamed to the host with minimal latency. The architecture is inherently scalable and supports parallel operation of multiple detector modules with shared clock synchronization.
Experimental validation using flood-field irradiation with a $^{22}$Na source demonstrates correct system operation and clear identification of scintillator crystals in reconstructed flood maps when waveform integration is used for energy estimation. The results highlight both the feasibility of SiPM multiplexing in fully digital PET readout chains and its sensitivity to SiPM gain dispersion and position-dependent pulse-shape effects. Overall, this work demonstrates a cost-efficient, flexible PET detector architecture well suited for real-time and intraoperative applications, while identifying key directions for further optimization of digital energy extraction and sensor uniformity compensation. Speaker: Giancarlo Sportelli (University of Pisa) -
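A minimal sketch of Anger-style position decoding from four position-encoded signals, as used in the multiplexing network above; the corner convention and weighting are assumptions for illustration, not the module's actual mapping:

```python
def anger_position(a, b, c, d):
    """Anger-logic decode of four corner signals into (x, y, energy).

    Assumed convention: a=top-left, b=top-right, c=bottom-left,
    d=bottom-right; x and y are normalized to [-1, 1].
    """
    e = a + b + c + d                 # summed energy
    x = ((b + d) - (a + c)) / e       # right minus left
    y = ((a + b) - (c + d)) / e       # top minus bottom
    return x, y, e

# Equal corner signals decode to the centre of the array
print(anger_position(1.0, 1.0, 1.0, 1.0))  # (0.0, 0.0, 4.0)
```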
12:05
A Real-Time Trigger and DAQ System for the MuEDM Experiment 20m
The electric dipole moment (EDM) of fundamental particles is intimately connected to the violation of time-reversal invariance T and of the combined charge-parity symmetry CP. The MuEDM experiment aims to measure the muon EDM with enhanced sensitivity using, for the first time worldwide, the frozen-spin technique, by studying the up–down asymmetry of positrons from muon decay. A polarized muon beam from the Paul Scherrer Institute (PSI) is injected into a uniform magnetic field region, where muons are stored and observed. A fast and selective trigger system is a critical component of the experiment, responsible for identifying storable muons and activating the pulse kicker within a latency below 105 ns. The MuEDM Muon Trigger Detector (MTD) is based on an anti-coincidence scheme between Gate and Aperture plastic scintillators, optimized for high efficiency, compactness, and strong background rejection. The trigger concept has been validated in test beams performed in 2022 and 2024, showing more than 97% agreement between measurements and simulations. In parallel, the data acquisition system has been developed around the CAEN FERS A5202 electronics, providing high channel density, precise timing, integrated SiPM biasing, and triggered readout capabilities. The DAQ has been fully integrated into the MIDAS framework, enabling run control, slow control, and data storage in multiple formats. The system was successfully operated during the December 2025 test beam, where stable acquisition and detector performance were demonstrated. Selected results from laboratory measurements and beam tests will be presented. Speaker: Carlo Veo (Università di Pisa) -
12:05
A TDC based ASIC readout for SiPM matrices for TOF-PET application 20m
We present a scalable digital readout for SiPM matrices in TOF-PET using the HRFlexToT front-end ASIC and a multi-channel FPGA TDC. HRFlexToT delivers per-channel pulse encoding for joint time/energy extraction (linear ToT and Time+Energy PWM) and provides a fast LVDS timing-OR for flexible triggering. On the FPGA, a tapped-delay-line TDC with Nutt interpolation combines a coarse counter with calibrated fine time, where the calibration is derived from histogram-based bin-width estimation and a characteristic curve mapping to picosecond timestamps. A Cyclone 10 GX design-space exploration indicates feasible 64-channel implementations with ~50–100 ps rms timing precision at ~40% area occupation; a representative configuration achieves a 35.1 ps LSB and 40.2 ps rms precision.
Speaker: Nicola Belcari (Department of Physics, University of Pisa) -
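The histogram-based bin-width (code-density) calibration mentioned above can be sketched as follows: under uniformly distributed hits, each delay-line bin's width is proportional to its hit count, and the characteristic curve maps a bin index to a timestamp. The bin counts and clock period below are illustrative only:

```python
def calibrate_bins(counts, clock_period_ps):
    """Code-density calibration: derive bin widths and a characteristic
    curve (bin-centre timestamps) from uniform-hit statistics."""
    total = sum(counts)
    widths = [clock_period_ps * c / total for c in counts]
    centers, acc = [], 0.0
    for w in widths:
        centers.append(acc + w / 2)   # timestamp at the centre of each bin
        acc += w
    return widths, centers

# Toy delay line: 4 bins under an 8 ns coarse clock (illustrative numbers)
widths, centers = calibrate_bins([100, 300, 100, 500], 8000.0)
print(widths)   # [800.0, 2400.0, 800.0, 4000.0]
print(centers)  # [400.0, 2000.0, 3600.0, 6000.0]
```

With Nutt interpolation, this fine time is then combined with the coarse counter value to form the full picosecond timestamp.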
12:05
An integrated biasing power supply, control and acquisition system for large Langmuir probe arrays in cold plasmas 20m
This contribution describes EPICA, a system developed to manage arrays of Langmuir probes for MITICA, ITER's full-scale NBI prototype.
Speaker: Mr Mattia Bevilacqua (Università di Padova/Consorzio RFX) -
12:05
BESIII Trigger Fast Control System Upgrade 20m
The BESIII experiment, operating at the Beijing Electron–Positron Collider II (BEPCII), has delivered a broad range of significant physics results in the tau–charm energy region. The BESIII trigger system comprises fast event selection and a Fast Control System (FCS). As the central control infrastructure, the FCS integrates trigger timing, control logic, and interface modules. It consists of the Clock Fan-out Board (CLKF), Fast Control Board (FCTL), Fast Control Daughter Board (FCDB), Fast Control Signal Fan-out Board (FCSF), and Trigger Readout Control Board (TROC). Having been in stable operation for over 16 years, the original FCS was built on legacy hardware platforms and firmware toolchains that now face aging and scalability limitations. To address these challenges, this work presents a comprehensive upgrade of the FCS across hardware, firmware, and software. A new FCTL based on the AMD Kintex UltraScale FPGA and an updated CLKF have been developed, together with a firmware architecture upgrade of FCTL. The upgraded firmware maintains all legacy functionalities while introducing network-based data readout via the SiTCP protocol. Furthermore, EPICS-based software was developed to monitor SiTCP data, improving system integration and real-time control. Laboratory tests verify that the hardware, firmware, and software designs meet all performance requirements. The system is prepared for on-site deployment and subsequent operation within the BESIII experiment.
Speaker: Xin Cao (Institute of High Energy Physics) -
12:05
Data Processing Firmware of the Upstream Tracker for LHCb Run 3 20m
LHCb, one of the four main experiments at CERN, was upgraded for Run 3 to enable fully software-based triggering and data acquisition. A key component of this upgrade is the Upstream Tracker (UT), a silicon microstrip detector installed in early 2023. The UT comprises 968 silicon sensors and approximately 4192 SALT ASICs, which perform analog processing, digitization, common-mode subtraction, zero suppression, and data serialization. The resulting data streams are transmitted to the TELL40 back-end readout boards.
Each TELL40 board can be viewed, in simplified terms, as a set of optical fibre receivers connected to a high-performance Intel Arria 10 FPGA, responsible for data processing and transfer to a PCIe interface.
This contribution focuses on the TELL40 FPGA gateware developed for the UT readout. It describes the challenges posed by the UT electronics architecture, data formats, high data rates and zero deadtime. It presents the adopted mitigation strategies, which reduced system complexity at the cost of specific trade-offs.
In addition to the core data acquisition functionality, several monitoring and control mechanisms were implemented within the Experiment Control System (ECS) and the Timing and Fast Control (TFC) system. The resulting architecture and the key design decisions are presented.
Finally, the development of a system with custom inputs and outputs required the application of dedicated design verification and debugging techniques, as well as the development of supporting software, which are also described.
Speaker: Carlos Abellan Beteta (University of Zurich (CH)) -
12:05
Design of a Full Trigger Data Readout Scheme for the BESIII MDC Sub-trigger System 20m
BESIII (Beijing Spectrometer III) is a large general-purpose detector operating at BEPCII (Beijing Electron–Positron Collider II). Since 2009, it has operated stably and delivered many representative physics results. In BESIII experiment, MDC (Main Drift Chamber) not only provides charged-particle tracking information, but also supplies key inputs to the Level-1 (L1) trigger.
Trigger algorithms in the MDC sub-trigger system are optimized during the design stage using simulation and are then kept stable for long-term running. Meanwhile, the DAQ readout of detector raw data strongly relies on the Level-1 Accept (L1A) signal generated by the trigger system. Under this architecture, the input information and intermediate data during the trigger process are difficult to systematically acquire, which has limited offline cross-check and iterative improvement of the trigger algorithms based on real running inputs, and therefore constrains further studies on trigger algorithms.
To address this need, we have developed a new generation of trigger electronics based on AMD Kintex UltraScale FPGAs, providing enhanced data transmission, buffering, and processing capabilities, together with an additional full-readout path for trigger data. We have also designed a dedicated readout frame format for trigger data and investigated transport solutions such as RDMA (Remote Direct Memory Access). The upgraded hardware and firmware remain compatible with the existing system and do not affect the critical trigger path.
This work explores a practical approach to introduce full readout of trigger data through hardware upgrades, providing a feasible hardware basis for future trigger-strategy studies. Speaker: Mr Haoxin Wang (Institute of High Energy Physics (IHEP), Chinese Academy of Sciences) -
12:05
FPGA based DAQ Readout Implementation for the CMS Electromagnetic Calorimeter at the High-Luminosity LHC 20m
The Phase-2 upgrade of the CMS detector at the High-Luminosity LHC (HL-LHC) will increase the overall throughput of the detector readout from 1.6 Tbps of the present-day system to 51 Tbps. This increment in data rate corresponds to an increase in the Level-1 trigger rate from 100 kHz to 750 kHz during HL-LHC. The increase in the latency and trigger rate requirements of the Phase-2 Level-1 trigger system requires the CMS Electromagnetic Calorimeter (ECAL) to shift the buffering of detector data and the generation of trigger primitives from the ASIC-based front-end electronics to a more flexible FPGA-based backend. A DAQ readout scheme is implemented on the Barrel Calorimeter Processor (BCP) board, which is based on the Xilinx XCVU13P FPGA. The firmware implementation is highly flexible and allows buffering of ECAL data for up to 12.8 $\mu$s and handling trigger rates as high as every LHC bunch-crossing (40 MHz). The firmware also provides flexibility to select a range of readout windows from as low as one bunch-crossing to as high as 256 bunch-crossings. It also provides the ability to handle back-pressure emerging from any source in the CMS DAQ system. To improve efficiency and relax the overall CMS DAQ, it can reduce the size of every triggered event dynamically using the Zero-Suppression (ZS) scheme. The design is tested on the BCP board using CMS-simulated data as well as from the prototyped ECAL ASIC front end. The proposed design and the test performance are summarized in the poster.
Speaker: Piyush Kumar (University of Notre Dame (US)) -
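A generic zero-suppression sketch in the spirit of the ZS scheme described above: samples are dropped unless they sit in a window around an above-threshold excursion. The threshold logic and window parameters are illustrative, not the actual CMS ECAL algorithm:

```python
def zero_suppress(samples, threshold, presamples=1, postsamples=1):
    """Keep only samples near above-threshold regions; drop the rest.

    Returns surviving (index, value) pairs so the event record shrinks
    while the pulse shape around each hit is preserved.
    """
    n = len(samples)
    keep = [False] * n
    for i, s in enumerate(samples):
        if s > threshold:
            # retain a small window around the over-threshold sample
            for j in range(max(0, i - presamples), min(n, i + postsamples + 1)):
                keep[j] = True
    return [(i, s) for i, s in enumerate(samples) if keep[i]]

data = [1, 2, 1, 9, 12, 3, 1, 1]
print(zero_suppress(data, threshold=5))  # [(2, 1), (3, 9), (4, 12), (5, 3)]
```

Dynamically shrinking each triggered event this way is what relaxes the downstream DAQ bandwidth requirement.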
12:05
FPGA based RDMA for BEE Readout 20m
Traditional TCP/IP protocols rely heavily on CPU processing for tasks such as packet encapsulation, parsing, and transfer, leading to substantial latency and resource consumption. In contrast, Remote Direct Memory Access (RDMA) enables direct data transfer between the network adapter and memory, bypassing the operating system kernel. This approach significantly reduces CPU overhead while delivering high bandwidth and ultra-low latency, making it an efficient solution for high-performance, data-intensive applications.
This paper presents the design and implementation of an FPGA based RDMA protocol stack for the BEE readout system of the Circular Electron Positron Collider (CEPC). The implementation leverages the RoCEv2 protocol over standard Ethernet, utilizing the CMAC IP core for link and physical layer processing, a custom designed RDMA core for protocol operations, and DDR for data buffering. By enabling kernel-bypass and zero-copy communication, the system can substantially lower both communication overhead and latency in the CEPC BEE-to-DAQ data link, thereby supporting the deployment of advanced software-based high-level triggers (HLTs) under extreme event rates.
Speaker: Chang Xu (IHEP, UCAS) -
12:05
FPGA-Based Deep Learning Acceleration for Real-Time $z$-Vertex Reconstruction in STCF L1 Trigger 20m
The Super Tau-Charm Facility (STCF) experiment will operate at high instantaneous luminosity, placing stringent requirements on the real-time performance of its L1 trigger system. Fast and reliable reconstruction of the primary vertex position along the beam axis z is essential for effective background suppression and early event selection under fixed latency constraints.
We present a deep neural network for real-time z-vertex reconstruction designed for deployment in the STCF L1 trigger. The network operates on high-level features extracted from track segments reconstructed in the central drift chamber. For each track candidate, information from eight super-layers is used, including the local track angle, relative azimuthal angle, and drift time. A lightweight attention mechanism is applied to dynamically reweight contributions from different detector layers, followed by one-dimensional convolutional and fully connected layers to exploit geometric correlations while maintaining hardware efficiency.
The proposed models are trained using simulated STCF events and evaluated against offline reference vertices. A z-vertex resolution of approximately 5 cm is achieved, which is sufficient for trigger-level discrimination and remains stable across the studied kinematic range. The networks are optimized and implemented on FPGA using the hls4ml framework with quantization-aware training, enabling fixed-latency and fully pipelined inference.
A systematic design space exploration is performed to study the trade-offs between vertex resolution, inference latency, and FPGA resource utilization. The results demonstrate that the proposed architectures satisfy L1 trigger latency constraints while offering flexibility for further optimization. This work highlights the feasibility and potential of deep learning–based vertex reconstruction in real-time trigger systems. Speaker: Shuangshuang Zhang (Shandong University) -
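The quantization-aware training step behind the fixed-latency FPGA deployment above can be illustrated with a signed fixed-point rounding model (ap_fixed-style, total bits and integer bits); hls4ml's actual rounding and saturation modes may differ, so treat this as a sketch:

```python
def quantize(x, total_bits=8, int_bits=1):
    """Round x to a signed fixed-point grid and saturate at the range edges.

    int_bits includes the sign bit, so <8,1> covers [-1, 1) in steps
    of 2**-7 -- an illustrative model of QAT's forward pass.
    """
    frac_bits = total_bits - int_bits
    step = 2.0 ** -frac_bits
    lo = -(2.0 ** (int_bits - 1))          # most negative representable value
    hi = 2.0 ** (int_bits - 1) - step      # most positive representable value
    q = round(x / step) * step             # round to the nearest grid point
    return min(max(q, lo), hi)             # saturate

print(quantize(0.30, 8, 1))   # 0.296875 (grid step 1/128)
print(quantize(5.0, 8, 1))    # 0.9921875 (saturated)
```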
12:05
Implementation of an Acquisition and Monitoring Module Based on the PTPv2 Protocol on EAST 20m
There are a large number of distributed control and monitoring devices in magnetic confinement fusion facilities. It is essential to conduct real-time monitoring of key parameters (such as temperature and voltage) of these devices, which is of great significance for ensuring the safe operation of the facilities and their associated systems.
This paper presents FPGA-based acquisition and monitoring equipment built on the PTPv2 protocol. First, the equipment supports PTPv2 to ensure consistent timestamps among acquisition and monitoring devices. Second, it is equipped with multi-range voltage conditioning and wide-temperature-range temperature acquisition channels, which, combined with a high-resolution analog-to-digital conversion module, enable dynamic acquisition and processing of weak signals. Third, the equipment features self-test and remote start-up functions, facilitating remote device management.
Test results show that the synchronization accuracy between devices is better than 50 ns; the voltage measurement accuracy is better than 0.04% F.S. in the low range (0–1 V) and better than 0.02% F.S. in the high range (1–10 V); within the temperature range of 20 K to 325 K, the acquisition error does not exceed 0.13 K. All these indicators meet the design requirements. Speaker: Zuchao Zhang (Institute of Plasma Physics, Chinese Academy of Sciences) -
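The timestamp consistency above rests on PTPv2's two-way timestamp exchange. A minimal sketch of the standard offset/delay arithmetic, assuming a symmetric network path (times in nanoseconds, values illustrative):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard PTP servo arithmetic for one Sync/Delay_Req exchange.

    t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assumes the one-way path delay is the same in both directions.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay
    return offset, delay

# Slave clock 100 ns ahead, 250 ns one-way path:
print(ptp_offset_and_delay(0, 350, 1000, 1150))  # (100.0, 250.0)
```

The slave then subtracts the measured offset from its clock; hardware timestamping at the PHY is what pushes the residual error to the tens-of-nanoseconds level reported above.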
12:05
Light output parameterization and calibration of the CsI and LYSO scintillators operated at ACCULINNA-2 for detection of ions with Z$<$5 20m
The ACCULINNA-2 fragment separator at the Flerov Laboratory of Nuclear Reactions (FLNR, JINR) was developed to study the properties of light exotic nuclei at the boundaries of nuclear stability in reactions with beams of light radioactive ions up to energies of 50 AMeV. Multilayer $\Delta E$-$E$ telescopes are often used for identifying reaction products and measuring their momenta. Heavy inorganic scintillators typically form the $E$ layer, where the energy deposited by the fully stopped ions is measured. In our setup, arrays of CsI(Tl) and LYSO scintillation crystals are employed. The light output per 1 MeV of energy loss is not constant for either CsI(Tl) or LYSO and depends on the charge, mass, and energy of the ion. Parameterization of the light output and energy calibration of the scintillators are necessary for data processing. The data acquired in the experiment devoted to the study of cluster structures in neutron-rich $^{10,12}\mathrm{Be}$ via $^{10,12}\mathrm{Be}(d,^6\mathrm{Li})^{6,8}\mathrm{He}$ transfer reactions in inverse kinematics were used for characterization of the CsI(Tl) and LYSO scintillation detectors. The methods developed for measuring the thickness of the $\Delta E$ layer by analyzing the elastic scattering peak of a beam ion, as well as for separation of ion species on a $\Delta E$-$E$ identification plot, will be reported. Additionally, the results of light output parameterization and calibration for the CsI(Tl) and LYSO scintillators will be presented. The methods and results obtained here are applicable to many other experiments where CsI and LYSO scintillators can be used.
Speaker: Ms A. Mai (Flerov Laboratory of Nuclear Reactions, JINR, 141980 Dubna, Russia) -
12:05
Modular Ground Penetrating Radar Tomography System with a Combination of Star and Daisy Chain DAQ and Trigger Topologies with Picosecond Accuracy 20m
The AgraSim experiment at Forschungszentrum Jülich is a unique laboratory for research on the impact of climate conditions on agricultural ecosystems and for the optimization of climate models. For the high-resolution 3D tomography of soil parameters, we are developing a modular Ground Penetrating Radar (GPR) monitoring system with 39 DAQ modules, each containing 64 antennas. The modules will be mounted around a soil-filled lysimeter with a height of 1.5 m and a diameter of 1 m. The GPR emits a 3 ns long time-domain pulse generated by one of its module's DACs. The signal is guided through a 1-to-64 multiplexer to one antenna, penetrates the soil, and is received by another antenna on the other side of the lysimeter. The received signal is amplified and multiplexed to one of eight ADCs of the corresponding DAQ module. This is done for all antenna combinations. All DAQ modules are controlled, triggered and synchronized by a central main module that also collects the measurement data from the modules and forwards them to external storage. The topology of the DAQ system is a mixture of a star topology for the first 13 modules, followed by a daisy-chain topology connecting two additional DAQ modules to each of the first modules. A full tomogram takes about 8 s and contains up to 25 GB of data. Initial evaluations show a synchronisation accuracy of about 40 ps. A system with two DAQ modules was recently tested at a real lysimeter.
Speaker: Dr Achim Mester (Forschungszentrum Jülich GmbH, ITE) -
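A back-of-envelope check of the measurement count behind the "all antenna combinations" scan above, under the simplifying assumption that every antenna can pair with every other one (the real system restricts pairs by geometry and multiplexer routing, so this is an upper bound):

```python
# 39 DAQ modules x 64 antennas each, as quoted in the abstract
modules, antennas_per_module = 39, 64
n = modules * antennas_per_module

# Unordered transmit/receive pairs, excluding an antenna pairing with itself
unordered_pairs = n * (n - 1) // 2

print(n)                # 2496 antennas in total
print(unordered_pairs)  # 3113760 possible pairs (upper bound)
```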
12:05
Mu3e Online Event Selection on GPU 20m
The Mu3e experiment aims to search for Charged Lepton Flavor Violation (cLFV) through the rare decay $\mu^{+}\rightarrow e^{+}e^{-}e^{+}$, targeting a branching-ratio sensitivity of 10$^{-15}$ using the PSI piE5 beamline in Phase I.
To cope with the exceptionally high muon rate of 10$^{8}$/s (equivalent to $\sim$80 Gbps raw data rate), a triggerless, GPU-based online event selection algorithm is implemented on the computing farm to reconstruct tracks and vertices in real time and identify Mu3e signal candidates, thereby reducing the data rate by two orders of magnitude. Speaker: Chen Xie (ETH Zurich (CH)) -
12:05
Recent Results from CHILLAX Xenon-Doped Argon R&D 20m
Low-background noble liquid Time Projection Chambers (TPCs) are commonly used in particle physics for dark matter, neutrinoless double beta decay, Coherent Elastic Neutrino-Nucleus Scattering (CEvNS), and other rare event searches. Xenon and argon are the two most typical media for this technology, each conferring unique advantages: argon is inexpensive and therefore more easily scalable, whereas xenon’s scintillation light is longer in wavelength and therefore more easily detectable with commercial photodetectors and is less subject to attenuation in the liquid medium. While experiments have primarily leveraged either element, argon doped with xenon at the ~10 ppm-to-percent level is becoming increasingly attractive for use in rare event searches, preserving some attractive properties of both media to produce xenon-like scintillation signals at argon-like costs. The CHILLAX experiment leverages a mixed-medium TPC to characterize the electroluminescence properties of this doped medium as a function of xenon concentration and has achieved the highest concentration of xenon in liquid argon to date at 5%. We discuss the signal generation and collection architecture behind recent CHILLAX results and how these signals are analyzed to study the effect of xenon concentration on argon gas electroluminescence. We further address improvements to the light readout system and liquid level meter and consider their impacts on overall detector sensitivity.
Speaker: Dr Adam Tidball (University of California, Davis) -
12:05
Studies of track reconstruction performance in the ATLAS Event Filter for the HL-LHC 20m
The instantaneous luminosity at the High-Luminosity LHC (HL-LHC) will reach unprecedented levels, boosting the physics reach at the LHC. To cope with the resulting challenging pile-up conditions and fully exploit the new high-granularity Inner Tracker (ITk), a major upgrade of the ATLAS Trigger and Data Acquisition (TDAQ) system is ongoing, with track reconstruction in the Event Filter being a critical component. Achieving an online tracking performance close to that of offline algorithms is essential to ensure a successful physics program at the HL-LHC, providing the required trigger efficiency while maintaining sustainable trigger rates. Over the past years, an extensive R&D effort has been carried out to design a heterogeneous computing system, exploring possible integrations of CPU cores with GPU or FPGA accelerators at different stages of the tracking workflow, to identify the technology with the highest potential in terms of throughput, power consumption, cost, and tracking performance. This contribution will focus on the remarkable tracking performance achieved across the different technologies, demonstrating the strong potential of tracking at the Event Filter level.
Speaker: Marco Aparo (University of Sussex (GB)) -
12:05
Supernova processing implementation of JUNO DAQ 20m
The JUNO DAQ supernova data processing system establishes a complete, automated, and highly reliable data processing pipeline. This system is crucial for ensuring that data are processed automatically and in full in the event of a supernova burst.
The workflow begins with a highly available alert service, which consolidates alerts from multiple sources and generates a unique supernova event identifier (SN-ID), laying the foundation for subsequent analysis. For data processing, database-backed state management ensures reliability, while a containerized microservices architecture enhances service availability. The entire system operates on a Kubernetes-based container orchestration platform. This enables dynamic scheduling and elastic scaling of computing tasks within the resource-limited on-site environment. It maximizes the utilization of on-site computing potential while guaranteeing the absolute priority of regular data acquisition tasks, achieving an optimal balance between resource efficiency and operational reliability. Speaker: Yimou Xiang (Institute of High Energy Physics, Chinese Academy of Sciences)
-
11:25
-
11:25
→
12:30
Poster session: Front-End Electronics, Fast Digitizers, Fast Transfer Links & Networks
-
11:25
Design of Prototype Readout Electronics of the High Counting Rate Main Drift Chamber in Particle Physics Experiments 20m
The main drift chamber (MDC) is a crucial component of large collider experiments. As a tracking detector, it requires excellent position resolution, which depends on high-precision charge and time measurements provided by the readout electronics. As collider luminosity increases, the counting rate of the main drift chamber also rises. Existing readout electronics for main drift chambers cannot address issues such as complex multi-peak structures, significant waveform inconsistencies, and waveform pile-up caused by prolonged signal duration under high counting rates. In this study, a prototype readout electronics system for high-counting-rate main drift chambers was designed. First, the waveforms detected by a high-counting-rate main drift chamber were simulated. Based on the features of the simulated waveforms, the front-end readout module (FEM) was designed. The FEM employs a transimpedance amplifier (TIA) to achieve distortion-free amplification of small signals. The data acquisition module (DAM) utilizes an analog-to-digital converter (ADC) and a field-programmable gate array (FPGA) to digitize the waveforms. It can digitize the multi-channel signals from the FEM and simultaneously perform charge and time measurements, thereby calculating dE/dx, the energy-loss information, enabling track detection. Test results from high-counting-rate MDC waveform simulation tests and joint detector tests show that within the dynamic range of 60–1800 fC, the system delivers a charge resolution better than 8 fC and a time resolution better than 1 ns, meeting the requirements of high-counting-rate main drift chambers.
Speaker: Yilin Ma (University of Science and Technology of China) -
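The joint charge and time measurement on digitized waveforms described above can be sketched as a baseline-subtracted integral plus a leading-edge threshold crossing. All numbers, units, and window choices here are illustrative, not the system's actual parameters:

```python
def extract_charge_time(samples, dt_ns, n_baseline=4, threshold=10.0):
    """Charge as the baseline-subtracted integral; time as the first
    leading-edge crossing of a fixed threshold."""
    baseline = sum(samples[:n_baseline]) / n_baseline     # pre-pulse average
    corrected = [s - baseline for s in samples]
    charge = sum(corrected) * dt_ns                       # amplitude x ns
    t = next((i * dt_ns for i, s in enumerate(corrected) if s > threshold),
             None)                                        # None if no pulse
    return charge, t

wave = [2, 2, 2, 2, 30, 60, 40, 12, 4, 2]
print(extract_charge_time(wave, dt_ns=1.0))  # (136.0, 4.0)
```

A real high-rate MDC chain would additionally need pile-up handling and multi-peak deconvolution, which this sketch omits.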
11:45
A Timing Measurement Prototype for LGAD Using the LATRIC ASIC 20m
This paper presents the design and characterization of a readout prototype for Low-Gain Avalanche Diode (LGAD) sensors. Aimed at applications requiring excellent timing resolution, such as particle tracking in future high-energy physics experiments, the prototype addresses the key challenge of processing LGAD signals with fast rise times and moderate gain. The dedicated system integrates two application-specific integrated circuits (ASICs), named LATRIC0 (LGAD Timing Readout Integrated Circuit), on a compact printed circuit board. LATRIC0 is a single-channel timing measurement chip that integrates a low-noise, high-bandwidth front-end amplifier, a fast comparator, and a high-precision time-to-digital converter (TDC). LGAD pixels with a size of 2.5 mm × 2.5 mm are wire-bonded on the board. Initial bench-top testing using a pulsed laser demonstrates a coincident-hit timing resolution of ~25 ps between two LATRIC chips, corresponding to a single LGAD+LATRIC timing resolution of ~17 ps. These results validate a scalable and cost-effective readout architecture and confirm its strong potential for integration into large-area LGAD-based timing detector systems.
Speakers: Chuanye Wang (Nanjing University), Xiongbo Yan (Institute of High Energy Physics) -
12:05
A 14-bit 100 MS/s Pipeline ADC for Silicon Pixel Sensor in Gaseous Detectors 20m
The Heavy Ion Research Facility in Lanzhou (HIRFL) and the future High-Intensity Heavy-ion Accelerator Facility (HIAF) are China's leading heavy-ion research centers. Gaseous detectors are widely used in experiments at HIRFL and HIAF because they are cost-effective and have a minimal material budget for tracking. To address the readout needs of high-count-rate, high-resolution gaseous detectors with multidimensional measurements in future experimental setups, a silicon pixel sensor has been proposed. This silicon pixel sensor, designed in a 130 nm CMOS process, is expected to provide micrometer-level position resolution, energy measurement with noise of tens of electrons, and timing accuracy in the nanosecond range. Serving as the critical component of the silicon pixel sensor, a 14-bit, 100 MS/s ADC converts analog signals representing time and energy from a region of pixels into digital signals. This 14-bit 100 MS/s pipeline ADC employs a fully differential architecture with a redundancy-correction algorithm and a SHA-less technique. It consists of a 3.5-bit first stage, four 2.5-bit stages, and a 3-bit Flash ADC, which are controlled by two non-overlapping clocks. The MDAC circuit uses switched-capacitor comparators and cascode operational amplifiers with gain boosting. The ADC covers an area of 2600 μm × 2460 μm, consumes 140 mW at a 1.2 V supply, and achieves an effective number of bits of 13.65. This paper will present the design and performance of this ADC.
Speaker: Haoqing Xie (Institute of Modern Physics, Chinese Academy of Sciences) -
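The redundancy-correction (digital error correction) idea mentioned in this abstract can be illustrated with a simplified behavioral model. The sketch below uses idealized 1.5-bit stages rather than the 3.5-bit/2.5-bit stages of the actual design, so function names, thresholds, and stage count are illustrative assumptions, not the authors' implementation.

```python
def stage_1p5bit(v, vref=1.0):
    """Idealized 1.5-bit pipeline stage: a coarse 3-level sub-ADC plus
    residue amplification. The half-bit of overlap between levels is
    what lets later stages absorb comparator offsets (redundancy)."""
    if v > vref / 4:
        code = 2
    elif v < -vref / 4:
        code = 0
    else:
        code = 1
    residue = 2 * v - (code - 1) * vref  # amplified quantization error
    return code, residue

def convert(v, n_stages=12, vref=1.0):
    """Run the pipeline, then align-and-add the signed stage digits.
    The digit of stage i carries weight vref / 2**i."""
    estimate = 0.0
    for i in range(1, n_stages + 1):
        code, v = stage_1p5bit(v, vref)
        estimate += (code - 1) * vref / 2 ** i
    return estimate

# A mid-scale input is reconstructed to within vref / 2**(n_stages + 1).
print(abs(convert(0.3) - 0.3) < 2 ** -12)  # True
```

The same align-and-add principle applies to the 2.5-bit and 3.5-bit stages of the design described above; only the per-stage digit set and weights change.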
12:05
A Full on-Chip LDO Regulator with self-adapting compensation networks for Front-end Readout Circuit 20m
Cadmium Zinc Telluride (CdZnTe) detectors are widely used in high-energy physics, space detection, and nuclear medical imaging due to their high energy resolution for gamma rays and room-temperature operation. In order to supply clean power to a 32-channel front-end readout circuit for CdZnTe detectors, a fully integrated low dropout (LDO) voltage regulator is proposed in this paper.
Since the output pole varies with the load current and external frequency-compensation components are not allowed, it is difficult to keep the LDO stable. To address this problem, a novel frequency compensation method based on load-current partition is proposed. The topology of the compensation circuit can be adjusted by sensing the load current. Thus, the compensating zero can closely track the output pole over the full range of load current without sacrificing loop-gain bandwidth or increasing die area. Moreover, feedforward ripple cancellation is introduced to enhance the power supply rejection (PSR).
The proposed LDO has been designed and fabricated in 0.18 μm CMOS technology, occupying an area of 120×264 μm2. Simulation results show that the phase margin is greater than 53° at a load capacitance of 200 nF and a load current ranging from 0 to 200 mA. Measured load regulation and line regulation are 77 mV/A and 4.09 mV/V, respectively. Measured PSR is -21 dB at 1 MHz. The maximum power efficiency is 97.8% and the minimum dropout voltage is 70 mV. For a 150 mA load current step, the transient response time is less than 5.5 μs.
Speaker: Xiayu Wang (Northwestern Polytechnical University) -
12:05
A Modular MPSoC Platform for Real-time Data Processing for Climate Observation Instruments 20m
The increasing need for miniaturised climate observation instruments on stratospheric balloons and research aircraft, combined with increasingly complex measurement tasks, higher data rates, and the growing prevalence of low-cost nanosatellite missions, drives the need for a new generation of reliable control and processing units. These units must minimize mass, volume, and power consumption while supporting real-time data processing, reduction, and compression.
A modular and programmable data acquisition and processing platform was developed. This platform can handle two different sensors and provides the basis for real-time preprocessing of the captured sensor data with subsequent data reduction, e.g., through pixel binning and/or data compression. In a first application, the preprocessing of 2D infrared detector data from a Michelson interferometer was implemented. From images of 48x128 pixels and a frame rate of approximately 5000 fps, more than 6400 interferograms must be processed in parallel. In a first step, level 0 processing - consisting of non-linearity correction, spectral off-axis calibration, and resampling of the data - was implemented in VHDL. The optimal parameters for the interpolation kernel, the abscissa calculation, and the data formats were analyzed in advance using a Python model.
This contribution highlights the parameter analysis, functional VHDL blocks, and resource utilization. Initial results demonstrate the accuracy and efficiency of the VHDL implementation, confirming the platform's suitability for high-speed, real-time preprocessing in compact, low-power environments.
Speaker: Georg Schardt (ITE) -
12:05
A Multi-Mode Waveform Digitization System Based on Switched Capacitor Array Chips 20m
Waveform digitization directly samples detector analog signals and extracts timing and amplitude information via digital signal processing, and is widely used in readout electronics for nuclear and particle physics experiments. Switched Capacitor Array (SCA) architectures combine high-speed analog sampling with low-speed analog-to-digital converters (ADCs), offering advantages in power consumption, integration level, and achievable sampling rate compared with ultra-high-speed ADC-based solutions.
This work proposes a configurable waveform digitization architecture based on cascading multiple custom-developed SCA chips. For a two-chip configuration, the input signal is equally split by a wideband passive power divider and fed into two SCA chips. A phase-locked loop (PLL) controls the sampling clock phases, while FPGA-based trigger logic enables multiple operating modes. Time-interleaved sampling with a 180° phase offset achieves high sampling rates, waveform concatenation extends sampling depth, and alternating trigger allocation improves event rate.
Based on this concept, a multi-mode waveform digitization prototype was designed and implemented. Laboratory measurements demonstrate sampling rates of up to 10 Gsps, a continuous sampling window of approximately 100 ns, and an event processing capability of about 100 kHz. Further joint tests with a Picosecond Micromegas detector validate a 10 Gsps effective sampling rate and achieve a timing resolution better than 26 ps in cross-chip operation.
Speaker: Dingjun Li (University of Science and Technology of China) -
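The three operating modes described above can be sketched as a simple sample-combination rule. This is an illustrative model of the mode logic only (function and mode names are assumptions), not the FPGA trigger firmware.

```python
def combine(samples_a, samples_b, mode):
    """Combine the sample records of two cascaded SCA chips.

    interleave  - 180-degree clock phase offset doubles the effective rate
    concatenate - chip B's window follows chip A's, extending sampling depth
    alternate   - chips take triggers in turn, doubling the event rate
    """
    if mode == "interleave":
        # a0 b0 a1 b1 ...: chip B samples fall midway between chip A samples
        out = []
        for a, b in zip(samples_a, samples_b):
            out.extend([a, b])
        return out
    if mode == "concatenate":
        # back-to-back windows form one record of twice the depth
        return samples_a + samples_b
    if mode == "alternate":
        # per-trigger round-robin: each chip's record stays a separate event
        return [samples_a, samples_b]
    raise ValueError(f"unknown mode: {mode}")

print(combine([0, 2, 4], [1, 3, 5], "interleave"))  # [0, 1, 2, 3, 4, 5]
```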
12:05
Design of a Novel Pipelined SAR ADC for Multi-channel Front-end ASICs 20m
Frontier physics explorations, such as the heavy-ion experiments at the High-Intensity Heavy-ion Accelerator Facility (HIAF) and the NvDEx experiment at CJPL to search for neutrinoless double-beta decay, demand and thus propel the rapid development of new detector technologies. To address this demand, the next-generation multi-channel readout ASIC must integrate high-performance analog-to-digital converters as a core component. The ADC must deliver high precision and high speed to accurately capture signals, and operate within a tight power budget and a compact area to support high-density on-chip integration. In this paper, we have developed a novel 14-bit, 100 MS/s, pipelined SAR ADC in a 55 nm CMOS process. Two CDACs are used in stage one's SAR ADC and MDAC to save power. Also, a novel MDAC structure that separates the CDAC from the amplifier inputs is adopted to improve the speed and gain error. An extra reset phase is added to eliminate the memory effect of the CDAC in the stage-two SAR ADC. This paper will present the design and performance of this ADC.
Speakers: Mr Haoqing Xie (Institute of Modern Physics, Chinese Academy of Sciences), Prof. Chengxin Zhao (Jiangnan University) -
12:05
Low-Noise Room-Temperature Readout Electronics for TDM-SQUID TES Arrays in the AliCPT-40G Telescope 20m
Time-division multiplexing (TDM) based on direct current superconducting quantum interference devices (DC-SQUIDs) is a key readout technology for large transition-edge sensor (TES) arrays used in cosmic microwave background (CMB) experiments. The AliCPT-40G telescope plans to deploy a large-scale TES array operating at 40 GHz, which requires a low-noise, compact, and high-bandwidth TDM readout electronics system. In a TDM system, the room-temperature electronics are a major noise contributor beyond the intrinsic TES noise, and their performance directly affects the energy resolution of the detectors. In this work, we present the design and implementation of a low-noise room-temperature readout electronics system for a TDM-SQUID architecture. The system integrates TES and SQUID bias sources, flux-locked loop (FLL) readout, digital multiplexing control, and high-speed data acquisition. Based on measured parameters of a two-stage TDM SQUID operated at cryogenic temperature, design constraints on noise, bandwidth, and slew rate are derived for the room-temperature electronics. The bias sources achieve equivalent current noise densities below 0.1 pA/√Hz for TES bias and below 0.26 pA/√Hz for SQUID bias. The FLL readout electronics provide a closed-loop bandwidth of up to 1 MHz and a slew rate of 0.5 Φ₀/µs. The digitized readout system is based on multi-channel ADC/DAC devices with JESD204B serial interfaces, significantly reducing system size. The effective resolution exceeds 11.5 bits. The system supports real-time data transmission at 2 Gbps per channel. This work demonstrates a complete TDM room-temperature electronics solution suitable for the AliCPT-40G telescope and large TES arrays.
Keywords: Cryogenics; Data acquisition circuits; Digital electronic circuits; Instrumental noise
Speakers: Mr Tangchong Kuang (Shandong University), Prof. Xiangxiang Ren (Shandong University) -
12:05
Operations and Performance of the ATLAS Tile Calorimeter Phase-II Upgrade Demonstrator in Run 3 20m
The Tile Calorimeter (TileCal) is a sampling hadronic calorimeter that covers the central region of the ATLAS experiment at the Large Hadron Collider (LHC). The LHC will undergo a series of upgrades leading to the High-Luminosity LHC (HL-LHC). The TileCal Phase-II Upgrade will adapt the detector readout electronics to meet the challenges of a 1 MHz trigger rate, higher ambient radiation levels, and increased pile-up conditions.
The TileCal Phase-II upgrade project has undertaken an extensive R&D program. The Demonstrator Phase-II Upgrade module was built in 2014 with the upgraded readout electronics and backward compatibility with the present ATLAS Trigger and Data Acquisition system. Its electronics were evaluated during seven test beam campaigns using the CERN SPS fixed target facility. To gain more experience with collision data, the Demonstrator Phase-II Upgrade module was inserted into the ATLAS experiment in 2019. This module operates under real detector conditions during Run-3 (2022–2026).
This contribution describes the hardware and software upgrades of the Demonstrator Phase-II Upgrade module and discusses the operational findings from this module within ATLAS, as well as the latest performance results.
Speaker: Fernando Carrio Argos (Instituto de Física Corpuscular (CSIC-UV)) -
12:05
Performance Test of the SAMIDARE Board with Mini-TPC using heavy-Ion Beams 20m
The SPADI Alliance is developing a next-generation streaming data acquisition platform, aiming at a common and scalable streaming readout system. As part of the SPADI task force, we are developing a waveform digitizer board, SAMIDARE, to promote future standardization for large-scale TPC-based experiments, such as E16, HypTPC, SPiRIT, MAIKo, and CAT-M. In this study, as a first application example of the SAMIDARE-based DAQ system, we applied the prototype board to the active-target detector CAT-M. It has been developed for systematic measurements of the isoscalar giant monopole resonance (ISGMR), which provide constraints on the isospin-dependent incompressibility term $K_\tau$, an essential component of the nuclear matter equation of state. CAT-M consists of a compact beam-tracking TPC (Mini-TPC), a large TPC for recoil particle detection, and silicon strip detectors. Tracking and particle identification are performed using waveform information, enabling reaction reconstruction in inverse kinematics experiments under high-intensity heavy-ion beam irradiation. In this presentation, we report the results of a performance test of the SAMIDARE prototype board, conducted under heavy-ion beam irradiation using the Mini-TPC.
Speaker: Fumitaka ENDO (RIKEN Nishina Center) -
12:05
Prototype Readout Electronics System for a LET Spectrometer in Space Radiation 20m
Space radiation poses a major threat during space missions. Linear energy transfer (LET) is widely used for quantitatively assessing the effects of space radiation. Space radiation is characterized by a heterogeneous composition of particle types, a broad energy spectrum, and temporal variability, making it difficult to achieve high-precision, large-dynamic-range, real-time LET detection. In this paper, a prototype readout electronics system is proposed. The prototype system is designed for a LET spectrometer using a novel dynamic range extension method, achieving a large dynamic range and high measurement precision in real-time detection. This prototype system is developed for silicon telescopes composed of double-sided silicon strip detectors (DSSDs) of up to 3 layers. It contains three front-end electronics (FEE) modules and one data acquisition module (DAM). Electrical tests show that the charge dynamic range is up to about 700 fC, while the equivalent noise charge (ENC) of the prototype in high gain mode is less than 0.17 fC for all channels. The extended dynamic range is about 4000. To validate the LET measurement methodology, a neutron radiation field test was conducted at the China Institute of Atomic Energy. GEANT4-based simulations were performed to verify the correctness of the experimental results. The experimental results show good agreement with simulations.
Speaker: Mr Wenrui Sun (University of Science and Technology of China (CN)) -
12:05
The μMUX readout electronic system of the AliCPT 20m
To meet the electronic readout requirements of Transition Edge Sensor (TES) arrays in the Ali CMB Polarization Telescope (AliCPT) experiment, under conditions of high channel count, wide bandwidth, high-speed readout, and long-term stability, a Microwave Superconducting Quantum Interference Device (SQUID) Multiplexer (μMUX) readout electronics system has been designed and implemented. The system is composed of a control board, a Radio Frequency (RF) board, and the ZCU111 development board, covering a continuous readout bandwidth of 0 to 4.096 GHz. It enables excitation, up-down frequency conversion, digitization, and real-time processing of μMUX resonator arrays. The system achieves up-down frequency conversion and amplitude adjustment of RF signals through the RF board. The high-speed DACs and ADCs integrated on the ZCU111 development board are responsible for generating and collecting a comb of probe tones, and the FPGA implements digital down-conversion, channelization, and signal demodulation. The readout signals are channelized into independent 2 MHz-bandwidth channels using the Polyphase Filter Bank (PFB), enabling parallel readout of multiple resonators. Based on the complete electronic readout chain, noise analysis is performed on the system output, evaluating the power spectral density characteristics and the level of electronic noise of the signals after channelization. These results provide experimental evidence for performance optimization of the μMUX readout system and the application of large-scale TES arrays.
Speakers: Prof. Xiangxiang Ren (Shandong University), Zhekai Cheng (Shandong University) -
12:05
Toward Efficient Wireless Monitoring in Nuclear Facilities Based on Named Data Networking 20m
Nuclear facilities are gradually deploying large-scale wireless sensing systems to construct nuclear monitoring networks for continuous monitoring of equipment conditions and environmental parameters, such as the nuclear radiation monitoring system of EAST. In such networks, sensing data is frequently queried and reused under dynamic wireless conditions, which places stringent requirements on data availability and delivery efficiency. Named Data Networking (NDN), with its data-centric communication paradigm and in-network caching, enables repeated data access through cache reuse and mitigates single-point failures caused by unstable links or node outages. However, when monitoring queries exhibit strong correlation and network conditions change frequently, existing NDN mechanisms suffer from inefficient cache utilization and redundant Interest transmissions. To address these challenges, this paper proposes an association-aware network-coded NDN (NC-NDN) framework with a distributed caching strategy tailored for nuclear sensing environments. By incorporating random linear network coding, data retrieval is shifted from packet-level access to coded content delivery, enabling efficient multipath parallel transmission without relying on a single data source. Without altering the fundamental NDN communication paradigm, each node maintains a lightweight cache trail to capture historical request patterns and identify highly correlated coded blocks, enabling coordinated cache organization and adaptive in-network re-encoding. Experimental results demonstrate that the proposed framework achieves consistently lower data retrieval latency and hop count than representative baseline schemes, confirming its effectiveness in improving data delivery efficiency.
Speaker: Kai Shi (Tianjin University of Technology)
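The key property that makes caching network-coded blocks attractive in the scheme above — any set of blocks whose coding vectors span the source space suffices to decode — reduces to a rank check over the coding field. The sketch below shows this condition over GF(2) (the abstract uses random linear network coding over a larger field; names and the bitmask encoding are illustrative assumptions, not the paper's caching strategy).

```python
def gf2_rank(coding_vectors):
    """Rank over GF(2) of coding vectors packed as integer bitmasks,
    computed with a greedy XOR basis."""
    basis = []
    for v in coding_vectors:
        for b in basis:
            v = min(v, v ^ b)  # reduce v against the current basis vector
        if v:
            basis.append(v)
    return len(basis)

def decodable(coding_vectors, k):
    """A requester holding coded blocks can recover the k source
    blocks iff the rank of their coding vectors equals k."""
    return gf2_rank(coding_vectors) == k

# Three cached blocks over k=3 sources: the third vector is the XOR of
# the first two, so one more independent block is still needed.
print(decodable([0b110, 0b011, 0b101], k=3))  # False
print(decodable([0b110, 0b011, 0b100], k=3))  # True
```

This is why in-network re-encoding helps: a cache can serve any linearly independent combination, not one specific named packet.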
-
11:25
-
12:30
→
15:00
Lunch break 2h 30m
-
15:00
→
17:20
Data Acquisition and Trigger Architectures Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
-
15:00
A Real Time Archiving Framework from EPICS to Time Series Databases for Fusion Plant Data 20m
Superconducting tokamak fusion facilities continuously generate large volumes of plant data that must be archived reliably and queried efficiently to support system monitoring, fault analysis, and long-term operation. Fusion devices currently make wide use of the EPICS (Experimental Physics and Industrial Control System) architecture, and plant data are typically archived in files or relational databases. However, these approaches exhibit limited query performance when handling large-scale time-series plant data, making it essential to archive plant data into a TSDB (Time Series Database), which has been demonstrated to deliver excellent query performance on massive time-series data. To address this problem, we propose the EPICS–TSDB Real-Time Archiving Framework (ETRA). Specifically, our approach archives EPICS process variables (PVs) into the TSDB backend in real time. By leveraging TSDB-native data organization and indexing mechanisms, it significantly improves query efficiency for large-scale plant data, alleviating the access performance limitations of relational databases and file-based formats. Furthermore, ETRA provides an AI model interface. This enables the archived plant data to be used directly for machine-learning inference without moving the data to a separate ML service platform, thus accelerating data processing. ETRA has been fully designed and functionally implemented, and it is planned for deployment on EAST (Experimental Advanced Superconducting Tokamak).
Speaker: Dr Guang Yang (University of Science and Technology of China) -
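A real-time PV-to-TSDB pipeline of the kind described above typically decouples PV monitor callbacks from database writes with a bounded buffer flushed in batches. The sketch below simulates that pattern in plain Python; the class and method names, batch size, PV name, and the `write_batch` sink are illustrative assumptions, not ETRA's actual interface.

```python
import threading

class PVArchiver:
    """Buffer (pv_name, timestamp, value) samples delivered by monitor
    callbacks and flush them to a TSDB sink in fixed-size batches."""

    def __init__(self, write_batch, batch_size=4):
        self._write_batch = write_batch  # e.g. a TSDB client's bulk insert
        self._batch_size = batch_size
        self._buf = []
        self._lock = threading.Lock()    # callbacks may arrive concurrently

    def on_monitor(self, pv_name, timestamp, value):
        """Called for every PV update; flushes once a batch fills up."""
        with self._lock:
            self._buf.append((pv_name, timestamp, value))
            if len(self._buf) >= self._batch_size:
                batch, self._buf = self._buf, []
                self._write_batch(batch)  # one bulk write per batch

    def flush(self):
        """Drain any partial batch, e.g. on shutdown."""
        with self._lock:
            if self._buf:
                batch, self._buf = self._buf, []
                self._write_batch(batch)

# Simulated sink: collect batches instead of writing to a real TSDB.
batches = []
arch = PVArchiver(batches.append, batch_size=2)
for t in range(5):
    arch.on_monitor("EAST:coil:current", t, 1000.0 + t)
arch.flush()
print([len(b) for b in batches])  # [2, 2, 1]
```

Batching like this is what lets a TSDB's bulk-insert path keep up with high PV update rates; the lock keeps the buffer consistent when updates arrive from multiple monitor threads.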
15:20
Repurposing acquisition devices into trigger-based timing synchronization of break-down events during MITICA high voltage holding experiments 20m
A critical requirement for MITICA — a full-scale prototype of the heating Neutral Beam Injectors hosted at the Consorzio RFX Neutral Beam Test Facility for the ITER experiment — is the capability to withstand a continuous voltage of 1 MV across the vacuum gaps insulating the beam source from the grounded vessel. To validate this feature, a dedicated voltage-holding test campaign was conducted throughout 2024 and 2025 using a full-scale mock-up of the beam source. Tests involved an accurate characterization of the associated breakdown events: vacuum dielectric failures which result in rapid potential drops and generate strong current discharges.
In order to observe such phenomena, several current probes were deployed at different key sites of the plant and along the ${\sim}150$m transmission line, acquiring the instantaneous current flowing through all components. To guarantee noise immunity and reliable electrical insulation, battery-powered transient acquisition devices were used; however, this approach limited the options for implementing a precise measurement of the relative delays among acquired signals.
This contribution presents the solution adopted: a time reconstruction architecture based on cost-effective embedded RedPitaya (Zynq-7000 FPGA) devices repurposed as "timing-hubs", which function as configurable trigger multiplexers, capturing trigger signals as transients to facilitate the offline time reconstruction of event sequences. The system features a self-calibration mechanism that measures signal time-of-flight along optical fibers, generating delay offsets to synchronize acquired waveforms across a sparse, connected-graph topology of both acquisition devices and other hubs.
The results obtained suggest that this straightforward repurposing methodology is suitable for similarly challenging applications.
Speaker: Andrea Rigoni Garola (RFX) -
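The self-calibration step described above — turning measured fiber time-of-flight values into per-device delay offsets over a connected graph of hubs and digitizers — can be sketched as a breadth-first traversal from a reference node. Node names and the dict-based graph encoding are illustrative assumptions, not the actual MITICA configuration.

```python
from collections import deque

def delay_offsets(tof, root):
    """Assign each device a delay offset relative to `root`, given the
    measured one-way fiber time-of-flight per calibrated link.

    tof: {(a, b): nanoseconds} for each link (treated as undirected)
    Returns {node: offset_ns} for every node reachable from root."""
    # Build an undirected adjacency view of the calibrated links.
    adj = {}
    for (a, b), t in tof.items():
        adj.setdefault(a, []).append((b, t))
        adj.setdefault(b, []).append((a, t))
    offsets = {root: 0.0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for nbr, t in adj[node]:
            if nbr not in offsets:
                # Offsets accumulate along the path from the root hub.
                offsets[nbr] = offsets[node] + t
                queue.append(nbr)
    return offsets

# Hub chain: root -> hub1 (5 ns link) -> digitizer (3 ns link)
print(delay_offsets({("root", "hub1"): 5.0, ("hub1", "dig"): 3.0}, "root"))
# {'root': 0.0, 'hub1': 5.0, 'dig': 8.0}
```

With these offsets, waveforms recorded by independent battery-powered digitizers can be shifted onto a common timebase offline, which is the essence of the reconstruction described above.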
15:40
Design and Evaluation of DAQ Architectures for Prompt-Gamma Timing in Particle Therapy 20m
Prompt-Gamma Timing (PGT) for ion-beam range verification requires sub-nanosecond timing resolution, high duty cycle, and operation at particle rates relevant for clinical applications. In this study, successive DAQ and trigger architectures were implemented and evaluated, focusing on efficiency, scalability, and timing performance. The first setup relied on waveform digitizers coupled to silicon and scintillation detectors. Although accurate Time-of-Arrival (TOA) information could be extracted, the system suffered from a very low duty cycle (~0.4%) and a limited number of channels. To overcome these limitations, a second architecture adopted a TDC-based approach using the CERN PicoTDC combined with constant-fraction discriminators and a silicon strip front-end. This configuration enabled high-rate operation with high duty cycle and acquisition times of a few seconds, at the cost of increased system complexity and limited timing resolution for secondary radiation. The third setup employed the CAEN DT5203 PicoTDC in analogue configuration, reducing component count and enabling combined TOA and Time-over-Threshold (TOT) measurements at rates compatible with therapeutic beams. This architecture achieved TOA resolutions of a few tens of picoseconds for primary particles and approximately 250 ps for secondary detection. Experimental results demonstrate clear reconstruction of beam time structures and prompt-gamma timing peaks, confirming the feasibility of PGT under realistic irradiation conditions. However, limitations related to channel buffer saturation and restricted scalability for secondary detectors were observed. A fourth architecture is currently under development, aiming to integrate dual PicoTDC units to decouple primary and secondary acquisition. Ongoing studies focus on buffer occupancy, dead-time effects, and saturation at high rates.
Speaker: Felix Mas Milian (INFN Torino, UESC) -
16:00
Fast ML on FPGA for Particle Identification and Tracking 20m
Real-time data processing is a frontier field in experimental particle physics.
Machine learning methods are widely used and have proven highly effective in particle physics.
The increasing computing power of modern FPGAs allows for the addition of more sophisticated algorithms for real-time data processing.
Many tasks can be solved using modern machine learning (ML) algorithms, which are naturally suited to FPGA architectures.
An FPGA-based machine learning algorithm provides extremely low, sub-microsecond decision latency and produces information-rich datasets for event selection.
The project includes the development of a Machine Learning algorithm based on FPGAs for real-time particle identification and tracking in a Transition Radiation Detector and an Electromagnetic Calorimeter.
This report describes the progress in developing the ML-FPGA system and the results of beam tests.
Speaker: Sergey Furletov (Jefferson Lab (US)) -
16:20
Online Data Reduction for the ePIC dRICH Using a Multi-FPGA Neural Network 20m
The ePIC detector at the Electron-Ion Collider (EIC) includes a dual-radiator Ring Imaging Cherenkov sub-detector (dRICH) in its forward region providing particle identification capabilities over a wide momentum range. The system is partitioned into six sectors, each instrumented with ∼53k Silicon PhotoMultipliers (SiPMs) featuring single-photon sensitivity, and transmits hit data over ∼320k detector channels to the Data Acquisition (DAQ) system.
In the DAQ front-end, data from 4,992 Front-End Boards (FEBs)–each integrating an ALCOR64 ASIC to digitize signals from 64 SiPMs–are aggregated by 1,248 Readout Boards (RDOs) and transmitted via VTRx+ optical links to 30 back-end Data Aggregation and Manipulation Boards (DAMs). Each DAM–implemented with the FPGA-based FELIX-155 PCIe card from the ATLAS experiment–merges data from up to 42 RDOs and transfers them via PCIe to host memory, which then forwards event fragments to the experiment buffering system via 100 GbE.
During operation, radiation-induced damage is expected to increase the SiPM Dark Count Rate (DCR) to peaks of 300 kHz per channel. Given this high noise floor and the low fraction of bunch crossings producing physics events, an online data reduction system is needed in the DAQ back-end to classify and filter out DCR noise-only events and keep the output data rate at a manageable level.
We present a design based on a local triggering scheme, where the trigger signal evaluation is performed by an online classifier implemented as an MLP-based neural network distributed over the multi-FPGA system composed of 30 DAMs plus a dedicated Trigger Processor (TP) card.
Speaker: Cristian Rossi (INFN Sezione di Roma) -
16:40
Implementation of New Time Protocols on CTS Board for Clock Synchronization 20m
Event building from different data sources generated by data acquisition (DAQ) electronics modules is an important task for successful physics experiments. Timestamp-based event building is one of the conventional, widely used methods, and it is becoming more challenging with the introduction of new timing protocols. On the other hand, not all electronics in existing DAQ setups need to be replaced by new ones. Therefore, synchronizing different clock frequencies as DAQ systems transition to more resource-demanding schemes (i.e., streaming readout) is a high-priority task.
The Clock Timing Synchronizer (CTS) board is designed to provide clocks synchronized at the sub-nanosecond level to DAQ systems in which conventional and new electronics coexist and are located long distances apart. The White Rabbit PTP Core and MIKUMARI firmware have been implemented for the CTS, which is equipped with a Xilinx Kria K26C MPSoC, and tested. In this presentation, the CTS board will be introduced together with the implementation of the White Rabbit PTP Core and the MIKUMARI protocol, the features of the implemented firmware, the timestamp extraction test results, and the capabilities of the board. This work is a collaboration between FRIB and the SPADI alliance.
Speaker: Genie Jhang (Michigan State University (US)) -
17:00
A Modular Framework Architecture for the ATLAS Global Event Processor at the HL-LHC 20m
The High-Luminosity upgrade of the Large Hadron Collider (HL-LHC) at CERN will increase the instantaneous proton-proton collision rate by a factor of 5–7 relative to the current LHC, delivering an integrated luminosity of approximately 3000–4000 fb⁻¹ over its operational lifetime. This substantial increase in collision rate, pile-up, and event complexity requires a complete redesign of the ATLAS trigger and data acquisition (TDAQ) system.
A key component of this redesign is the Global Event Processor (GEP), which performs low-latency event selection, filtering, and routing in real time. Following the upgrade, the ATLAS detector is expected to generate data at rates in excess of 128 Tb/s, necessitating aggressive real-time data reduction to meet storage and bandwidth constraints.
The upgraded TDAQ architecture employs a distributed processing model in which collision events are delivered every 25 ns to a pool of FPGA-based GEP units in a round-robin fashion. Detector data is compressed to approximately 60 Tb/s for optical transmission and decompressed upon arrival at the GEP. Data arrival latencies vary from 3.18 µs to 6.26 µs after a collision due to heterogeneous detector technologies. Within each GEP, event processing is performed by a directed acyclic graph (DAG) of algorithm processing units (APUs) operating under a strict latency constraint of 7.66 µs after the collision event.
This paper presents a novel modular architecture enabling asynchronous data arrival, buffering, synchronization, and high-throughput processing across networks of streaming APUs, suitable for implementation in single- or multi-die ASICs or FPGAs.
Speaker: Jeff Eastlack (Michigan State University (US))
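The latency constraint on the APU graph described above amounts to a longest-path computation: the worst-case completion time at the end of the DAG, accounting for heterogeneous data arrival times and each APU's processing latency, must stay within the 7.66 µs budget. The sketch below checks that condition on a toy graph; the node names, delays, and two-stream example are illustrative assumptions, not the actual GEP topology.

```python
from graphlib import TopologicalSorter

def worst_case_latency(arrival, proc, edges):
    """Worst-case completion time (ns) over a DAG of processing units.

    arrival: {node: earliest input-arrival time} for source nodes
    proc:    {node: processing latency}
    edges:   list of (upstream, downstream) dependencies"""
    deps = {}
    for u, v in edges:
        deps.setdefault(v, set()).add(u)
    finish = {}
    for node in TopologicalSorter(deps).static_order():
        # A node starts once its own input has arrived and every
        # upstream APU has finished; it completes proc[node] later.
        start = max([arrival.get(node, 0.0)] +
                    [finish[u] for u in deps.get(node, ())])
        finish[node] = start + proc[node]
    return max(finish.values())

# Two detector streams with different arrival latencies feed a merger APU.
latency = worst_case_latency(
    arrival={"calo": 3180.0, "muon": 6260.0},   # ns after the collision
    proc={"calo": 400.0, "muon": 300.0, "merge": 800.0},
    edges=[("calo", "merge"), ("muon", "merge")],
)
print(latency, latency <= 7660.0)  # 7360.0 True
```

The late-arriving stream dominates: shortening work on the early stream buys nothing, which is why buffering and synchronization across heterogeneous arrival times matter in such an architecture.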
-
15:00
-
17:20
→
17:50
Coffee break 30m
-
17:50
→
19:30
Data Acquisition and Trigger Architectures Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
-
17:50
Triggerless Data Acquisition for Online Reconstruction in High-Rate Experiments 20m
Traditional triggered DAQ performs explicit hardware rate control: only events passing a fast trigger are fully read out. This reduces throughput and buffering demands, but can lose physics because the decision uses partial detector information and a short time window, and it imposes pipeline buffering on the trigger-latency timescale.
Triggerless (free-running/streaming) DAQ streams all zero-suppressed hits and moves selection to an online reconstruction farm. The key constraint is in-order processing: the farm consumes fixed-width time slices strictly in chronological order, so the event-building network must output one globally time-ordered stream. Variable transport delay creates cross-lane skew; any late slice causes head-of-line blocking and reordering buffer growth, so strict ordering requires an order-preserving barrier that waits for the slowest lane. Thus, the choice and placement of sort/merge primitives directly determine system-wide total buffer size (which can be even unbounded) and sustainable throughput.
We review prior sorter/merger approaches and show why they do not scale to high-rate streaming. We then present a novel architecture: a cascading non-order-preserving merger $\rightarrow$ sorter (resequencer) pipeline feeding a single final barrier. Under our scheme, locally bounding skew at each fan-in stage prevents system-wide accumulation, minimizes end-to-end buffering, and maximizes sustained throughput under realistic burstiness and transport variability.
Speaker: Yifeng Wang (ETH Zurich (CH)) -
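The order-preserving barrier described in this abstract can be sketched as a watermark rule: buffered events are released only up to the minimum of the latest timestamps seen on every input lane, so the output stream stays globally time-ordered while waiting exactly as long as the slowest lane requires. The class below is a minimal single-threaded illustration; the names and two-lane example are assumptions, not the proposed hardware pipeline.

```python
import heapq

class OrderingBarrier:
    """Release buffered events in global time order, gated by the
    slowest lane's watermark (the latest timestamp seen per lane)."""

    def __init__(self, n_lanes):
        self._heap = []                   # min-heap of (ts, payload)
        self._watermark = [None] * n_lanes

    def push(self, lane, ts, payload):
        heapq.heappush(self._heap, (ts, payload))
        self._watermark[lane] = ts        # each lane delivers in order

    def pop_ready(self):
        """Emit every event at or below the slowest lane's watermark."""
        if any(w is None for w in self._watermark):
            return []                     # a lane has not reported yet
        cutoff = min(self._watermark)
        out = []
        while self._heap and self._heap[0][0] <= cutoff:
            out.append(heapq.heappop(self._heap))
        return out

barrier = OrderingBarrier(n_lanes=2)
barrier.push(0, 5, "a")
print(barrier.pop_ready())  # [] -- lane 1 is still silent
barrier.push(1, 3, "b")
print(barrier.pop_ready())  # [(3, 'b')] -- (5, 'a') must wait for lane 1
barrier.push(1, 7, "c")
print(barrier.pop_ready())  # [(5, 'a')] -- (7, 'c') now waits for lane 0
```

A single such barrier over many lanes forces buffering proportional to worst-case cross-lane skew, which is the scaling problem the cascaded merger/resequencer architecture above is designed to bound locally.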
18:10
CGEM-IT Data Acquisition System 20m
A dedicated full readout chain has been developed for the inner tracker CGEM (Cylindrical Gas Electron Multiplier) of BESIII, the Beijing electron-positron spectrometer, which was installed at the end of 2024 to replace the inner drift chamber.
It consists of a dedicated ASIC, called TIGER, and a readout module based on the Arria V FPGA, called GEMROC. The system reads out about 10,000 detector strips with a trigger rate of up to 4 kHz. Dedicated ancillary fanout modules collect the fast signals and distribute them to the GEMROCs at both ends of the spectrometer.
A new server running an updated version of GUFI (Graphical User Front-end Interface) has been deployed. It communicates with the BESIII DAQ via the TCP/IP protocol, enabling synchronized start and stop of acquisition, error monitoring, and full expert control over threshold settings, noise assessment, and data transmission integrity. It has also been extended with a dedicated acquisition and disk writing module. This highly parallelized component spawns processes that receive data from UDP packets and enqueue them into buffers, which are then emptied by dedicated writing processes to optimize the use of the machine’s hardware resources.
The adoption of this software-based solution demonstrates the ability to shift part of the acquisition workload from firmware to software, particularly by leveraging Python. This approach significantly reduces development and testing time while improving system scalability and ease of updates.
The presentation will focus on the CGEM DAQ in the context of the first months of BESIII beam commissioning.
Speaker: Gianluigi Cibinetto (Universita e INFN, Ferrara (IT)) -
18:30
Studies of FPGA accelerated track reconstruction for the ATLAS Event Filter 20m
The upcoming high-luminosity phase of the LHC (HL-LHC) presents several challenges for the ATLAS experiment's Trigger and Data Acquisition system, necessitating a full upgrade of the system. A key challenge for the Event Filter, where high-level event reconstruction and final event selection will run at 1 MHz, lies in the computational demand for online track reconstruction within the Inner Tracker. Over the past few years, extensive research has been conducted into utilising hardware accelerators in the ATLAS Event Filter system to improve tracking throughput and reduce full-system power consumption. Various end-to-end track reconstruction pipelines have been developed using GPUs and FPGAs. These pipelines demonstrate their capabilities by offloading different amounts of the computing load to the accelerators.
This contribution focuses on developments in FPGA-based track reconstruction pipelines integrated into the ATLAS software framework, Athena. A high-throughput FPGA accelerator for hit clustering and data preparation has been implemented in hardware, and various algorithmic extensions have been studied. The results will be compared with those of the CPU and GPU counterparts.
Speaker: Kevin Sedlaczek (Northern Illinois University (US)) -
18:50
Status and testing of the MDT Trigger Processor for the ATLAS Level-0 Muon Trigger at HL-LHC 20m
The Monitored Drift Tube Trigger Processor (MDT-TP) will improve the rate capabilities of the first-level muon (L0 Muon) trigger of the ATLAS Experiment during the operation of the HL-LHC.
Preliminary information about a muon trigger candidate, obtained by other muon trigger subsystems, will be combined with the precision of the MDT chambers in order to improve the muon momentum resolution, while limiting the trigger rate to an acceptable level in the high-pileup environment of the HL-LHC.
The MDT-TP trigger logic is implemented on an AMD VU13P FPGA, where MDT hits are extracted around the region-of-interest identified by the trigger candidate and are used to perform muon reconstruction and transverse momentum estimation. For the events selected by the L0 trigger, MDT hits are transmitted by the MDT-TP to the ATLAS data acquisition system via FELIX. Monitoring, configuration and interfaces with other ATLAS subsystems are implemented via services running on an AMD Zynq SoM.
Several tests of the MDT-TP are being conducted, including the configuration and monitoring of the MDT-TP and the on-detector electronics, communication with other L0 Muon trigger boards, on-hardware validation of the trigger and readout logic and readout via FELIX. The current status of the prototype testing and the recent updates on gateware and software developments will be presented.
Speaker: Rimsky Alejandro Rojas Caballero (University of Massachusetts (US)) -
19:10
Data Acquisition Architecture in TELE-NEURART project 20m
TELE-NEURART is an Italian-scale virtual paediatric network aimed at advancing the management of neurodevelopmental disorders through (i) tele-monitoring in ecologically valid home settings, (ii) remote telerehabilitation protocols, and (iii) AI-driven identification of digital biomarkers to support personalised yet nationally standardisable care pathways. The network connects specialised centres across Italian regions and relies on a shared research infrastructure for protocol harmonisation and data-driven clinical translation.
This paper presents the Data Acquisition Architecture designed to enable real-time, multi-site acquisition, integration, and processing of heterogeneous clinical and digital data. The DAA targets key challenges typical of distributed paediatric deployments: (1) data heterogeneity across imaging and diagnostic devices, robotic/telerehabilitation platforms, multimodal sensors and structured clinical forms; (2) semantic and technical interoperability, addressed through normalisation and harmonisation workflows and mapping to recognised clinical/functional frameworks, enabling aggregation and reproducible analytics; (3) multi-tier data lifecycle management, combining local storage and local analytics for quality control and pre-processing at the edge, latency reduction for monitoring, and sustainable synchronisation toward federated/central repositories; (4) robustness in real-world contexts, mitigating missing data, noise, device drift, and temporal misalignment via validation, metadata management, device/protocol versioning, time synchronisation, and buffering; and (5) privacy and governance by design, integrating pseudonymisation/anonymisation, access control, audit logging, consent governance, and secure data handling, which are critical for clinical data from minors.
Overall, the proposed DAA provides an AI-ready foundation supporting traceable pipelines from raw signals to curated datasets and derived features, enabling scalable development and validation of clinically actionable digital biomarkers within a distributed paediatric care ecosystem. Speaker: Pierpaolo Di Bitonto (Università degli Studi di Bari Aldo Moro)
-
17:50
-
19:30
→
22:00
Gala dinner 2h 30m
-
08:30
→
09:10
-
-
09:00
→
10:00
Front-End Electronics, Fast Digitizers, Fast Transfer Links & Networks Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
Convener: Carl Grace-
09:00
Development of a Cost-Efficient 64-Channel Multi-Phase Clock TDC on an Artix-7 FPGA 20m
Time-to-Digital Converters (TDCs) are indispensable components in data acquisition systems for particle and nuclear physics experiments. While performance requirements vary by detector, applications such as Multi-Wire Drift Chambers (MWDCs) and Parallel Plate Avalanche Counters (PPACs) demand high channel density with moderate resolution, ranging from 100 ps to 1 ns.
To address this, we have developed a cost-optimized TDC implemented on the AMD Artix-7, an affordable FPGA device. Although the multi-phase clocking technique is well-established, this design focuses on achieving high channel density and sufficient resolution by effectively utilizing the resources of the device. By employing manual routing to finely tune propagation delays between internal components, we successfully integrated 64 signal channels plus one trigger channel into a single chip.
Two firmware versions were developed: a 312.5 ps least significant bit (LSB) version supporting both leading and trailing edge detection, and a 125 ps LSB version for leading edges. The system features a trigger-matching mode to filter hits within a programmable coincidence window. This TDC has already been deployed in accelerator-based nuclear physics experiments, demonstrating stable operation. This contribution reports on the implementation strategy using manual routing, the achieved performance, and the practical advantages of using cost-efficient FPGA platforms for high-density TDC systems. Speaker: Hidetada Baba (RIKEN Nishina Center) -
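The multi-phase clock principle can be sketched in software: each clock cycle the hit line is sampled by several phase-shifted copies of the clock, and the index of the first phase that saw the edge gives the sub-clock fine time. The 16-phase / 200 MHz figures below are illustrative assumptions chosen only to reproduce the 312.5 ps LSB quoted above; they are not the actual Artix-7 implementation.

```python
def decode_fine_time(coarse: int, phase_bits: int, n_phases: int = 16,
                     clk_period_ps: float = 5000.0) -> float:
    """Convert a multi-phase TDC sample into a timestamp in picoseconds.

    `phase_bits` holds one sample of the hit line per clock phase
    (LSB = earliest phase). The position of the first '1' gives the
    fine time; with 16 phases of a 200 MHz clock the LSB is
    5000 / 16 = 312.5 ps (phase count and clock are assumptions).
    """
    lsb = clk_period_ps / n_phases
    for k in range(n_phases):
        if phase_bits >> k & 1:
            return coarse * clk_period_ps + k * lsb
    raise ValueError("no edge captured in this clock cycle")
```

In the FPGA this decode is a priority encoder; the careful manual routing mentioned above is what equalizes the delays so each phase bin really spans one LSB.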
09:20
Machine-Learning-Based Waveform Discrimination in the Front-End Electronics of the Belle II Central Drift Chamber for Cross-Talk Noise Reduction 20m
Machine learning (ML) inference on FPGAs has been widely adopted in real-time triggering of collider experiments for detector signature identification. The ML application in Front-End Electronics (FEE) has not yet been fully explored, primarily due to constraints such as limited FPGA resources and localized detector coverage.
In this work, we develop an ML-based waveform discrimination method for the Central Drift Chamber (CDC) of Belle II to reduce cross-talk noise at the front-end level. The Belle II CDC is a key charged-particle tracking detector for both offline reconstruction and the real-time hardware trigger. During Belle II operation, background wire hits have been observed in the CDC FEE, where large energy deposits produce multiple hits in neighboring anode wires. The hardware track trigger employs a Hough transformation based on track segments formed by combining hits from multiple wire layers. Due to the reduced dimension and the coarse mesh size in the conformal plane, the hardware track trigger is particularly sensitive to cross-talk noise, resulting in an increased fake trigger rate as luminosity rises in the future.
In our implementation in the FEE's Xilinx Virtex-5 FPGA, ML modules are operating in parallel on individual wire channels. Given its limited resources compared to modern devices, the primary challenges are not only achieving sufficient signal-noise discrimination, but also minimizing FPGA resource usage. We report on the ML model development, FPGA deployment, and validation with Belle II operation. The prospects for future Belle II upgrades incorporating this intelligent Front-End application will also be outlined.
Speaker: Prof. Yun-Tsung Lai (KEK) -
09:40
An all-in-one front-end board for the streaming readout of gaseous detectors used in experimental physics 20m
The need for continuous measurement without triggering on events, known as streaming data acquisition, is crucial for many experiments in nuclear, particle, and cosmic-ray physics. In Japan, the SPADI alliance provides complete streaming readout systems, from front-end hardware to data acquisition software, designed to be easily deployable and scalable. While several candidates for front-end hardware exist within the SPADI alliance, the current system for reading out gaseous detectors is split into two parts: the AGASA-based analog front-end board and the AMANEQ TDC (digital front-end) module. However, this system could be improved in several respects, such as size, cooling, cost, and cabling. Hence, we designed a new all-in-one board, named STAG (streaming readout with AGASA for gaseous detectors), which combines the analog and digital front-ends into one module. This not only greatly reduces the size of the front-end but also satisfies the other constraints faced by the current system. Furthermore, to meet the streaming readout requirements, the board is designed to handle data rates up to a few Mcps with a TDC resolution of 200–300 ps. This talk will present the development and evaluation of the STAG board, its future plans, and the related streaming readout system.
Speaker: Lakmin Wickremasinghe (RCNP, The University of Osaka)
-
09:00
-
10:00
→
10:30
Coffee break 30m
-
10:30
→
12:30
AI, Machine Learning, Real Time Simulation, Intelligent Signal Processing Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
-
10:30
Deep-learning based Real-time Optical Plasma Boundary Detection for Plasma Shape Control on EAST Tokamak 20m
Real-time and accurate plasma boundary reconstruction is critical for tokamak plasma control. Visible light diagnostics offer a promising solution for plasma shape control during steady-state discharges. In this study, a two-stage plasma boundary detection framework (YOLO-GRAY) was developed on the EAST tokamak. The framework utilizes YOLO to rapidly localize the optical emission region, followed by grayscale feature detection to precisely determine the optical boundary position. This approach enables more precise and robust boundary extraction. However, due to modifications in visible light diagnostics, the detection accuracy of the previously developed YOLOv8n-seg-CBAM model dropped to 0.493 for images with a new field of view (FOV). By employing transfer learning with only 50 new FOV images, accuracy improved to 0.963. Furthermore, the algorithm maintains robust performance during long-pulse operations. In a 450 s experiment, the Last Closed Flux Surface (LCFS) exhibited accumulated deviations of 0.5 cm in $Gap_{in}$ and 1.5 cm in $Gap_{out}$, whereas optical boundary positions remained stable. Building upon this, a real-time reconstruction system based on heterogeneous computing was developed. Hybrid CPU-GPU scheduling ensures single-frame reconstruction within 1.6 ms. Utilizing this system, multi-point plasma shape control based on optical boundaries was successfully demonstrated for the first time on EAST, validating the feasibility of optical-based plasma shape control. Overall, this system holds significant promise for plasma control in future fusion devices.
Speaker: Qirui Zhang (Institute Of Plasma Physics, Chinese Academy Of Sciences) -
10:50
Rapid Neutron Source Identification with Scatter-Based Spectrometers 20m
Reliable neutron source identification is essential for nuclear nonproliferation, safeguards, and homeland security, yet remains challenging due to the ill-conditioned nature of neutron spectral inversion. Here, we present a scalable Bayesian framework that overcomes these limitations through evidence-based model selection. We introduce an Adaptive Bayesian Evidence Pursuit (ABEP) algorithm that efficiently explores the combinatorial space of possible source ensembles by iteratively ranking and pruning candidate models using Bayesian evidence. We benchmark the framework using both experimental measurements and synthetic datasets generated with high-fidelity Monte Carlo simulations. The results demonstrate accurate identification of both single- and multi-source ensembles with high statistical significance ($>\!3\sigma$) and favorable scaling with detected event counts ($\sim\!\!10^3$ for single-source identification). These findings establish ABEP as a practical and robust tool for neutron source identification and significantly extend the operational capabilities of scatter-based neutron spectrometers in nuclear security and safeguards applications.
Speaker: Dr David Breitenmoser (University Of Michigan) -
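The evidence-driven model search described above can be caricatured by a greedy pursuit loop. This is a simplified stand-in, not the ABEP algorithm itself: candidate ensembles are ranked by a supplied log-evidence function and grown only while the evidence improves; the toy sources in the usage example are hypothetical.

```python
from typing import Callable, Sequence, Tuple

def evidence_pursuit(sources: Sequence[str],
                     log_evidence: Callable[[frozenset], float],
                     max_size: int = 3) -> Tuple[frozenset, float]:
    """Greedy sketch of evidence-based model selection: grow the source
    ensemble one candidate at a time, keeping an addition only if it
    raises the Bayesian (log-)evidence, and stop otherwise.
    """
    best: frozenset = frozenset()
    best_ev = log_evidence(best)
    while len(best) < max_size:
        trials = [(log_evidence(best | {s}), best | {s})
                  for s in sources if s not in best]
        ev, model = max(trials, key=lambda t: t[0])
        if ev <= best_ev:
            break                     # no candidate improves the evidence
        best, best_ev = model, ev
    return best, best_ev
```

In the real framework the log-evidence comes from marginalizing the spectral forward model over each candidate ensemble; here it is an arbitrary callable so the pruning logic can be shown in isolation.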
11:10
Real-Time Toroidal Equilibrium Reconstruction for RFX-mod2 via Quantized Neural Network in FPGA 20m
In preparation for the forthcoming operation of RFX-mod2 in reversed-field pinch configuration, a major upgrade to the experiment’s real-time control system is the transition from a simplified cylindrical approximation to the more accurate toroidal geometry.
However, the existing toroidal reconstruction code is currently available only for post-shot analysis, as it does not satisfy the timing constraints of the control system. We propose a surrogate model based on Neural Networks (NNs) that serves as the first component in a chain-of-models architecture for real-time magnetic perturbation estimation.
This model approximates the radial profiles of the poloidal and toroidal components of the equilibrium magnetic field, as well as the Shafranov shift.
To enable implementation on FPGA hardware, we apply quantization-aware training (QAT) to convert the model weights from standard 32-bit floating-point representation to fixed-point formats.
By treating the model’s weight bit-width as an additional optimization parameter and using Effective Bit Operations (EBOPs) as a proxy for silicon usage, we perform a multi-objective optimization that explores the trade-off between accuracy and resource cost.
In a subsequent stage, we further analyze the relationship between latency and FPGA resource usage, tuning latency to meet the required real-time performance.
Finally, we test the implemented model across the range of input parameters typically encountered in the experiment, demonstrating that it reproduces target quantities within an acceptable margin of error while meeting timing constraints and achieving substantial savings in FPGA resources compared with more straightforward approaches. Speaker: Lorenzo Saccaro (Università di Padova-Centro Ricerche Fusione, Italy; Consorzio RFX) -
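The fixed-point conversion at the heart of the QAT step above can be sketched as rounding and saturation of the weights to a chosen bit split; the (8, 6) format in the usage example is an illustrative choice, not the bit-width found by the authors' multi-objective optimization.

```python
def quantize_fixed_point(w, total_bits: int, frac_bits: int):
    """Round each weight to signed fixed-point with `frac_bits`
    fractional bits, saturating at the representable range -- a
    post-hoc analogue of the quantization applied during QAT
    (bit widths here are free parameters, as in the abstract).
    """
    scale = float(1 << frac_bits)
    lo = -(1 << (total_bits - 1)) / scale        # most negative code
    hi = ((1 << (total_bits - 1)) - 1) / scale   # most positive code
    return [min(max(round(x * scale) / scale, lo), hi) for x in w]
```

Sweeping `total_bits`/`frac_bits` while measuring output error is the accuracy-versus-resource trade-off the EBOPs proxy quantifies.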
11:30
Trigger-level track reconstruction and identification with machine learning and FPGAs for ATLAS 20m
At high-luminosity hadron colliders, trigger systems play a key role in preserving sensitivity to rare signals of new physics, even as they filter out the overwhelming background. This becomes especially challenging at the upcoming High-Luminosity Large Hadron Collider (HL-LHC), where extremely high event rates and pileup conditions are expected at the ATLAS experiment. We explore the integration of charged-particle tracking directly at the trigger level using machine learning. We present a neural network–based approach that predicts and associates detector hits belonging to the same track, operating in real time within the second-level trigger. Designed with a bottom-up philosophy, the model is optimized for simplicity and minimal input, making it well suited for deployment on Field-Programmable Gate Arrays (FPGAs). This hardware implementation enables fast data processing with low latency, while offering the flexibility to evolve the algorithm over time.
Speaker: Punit Sharma (Brookhaven National Laboratory (US)) -
11:50
Towards FPGA–Memristive In-Memory Computing for Real-Time Inference in Artificial Intelligence 20m
As trigger and data acquisition system complexity grows, processing near the detectors becomes essential to cope with throughput and event selection requirements. In this context, real-time artificial intelligence (AI) is gaining momentum. While FPGAs with AI engines are available, they are limited by power consumption, area, and latency due to the von Neumann bottleneck. In-memory computing mitigates these limitations by co-locating data and processing. Using conductance-tunable analog computing elements enables power-efficient vector-by-matrix multiplications, outperforming complementary metal-oxide-semiconductor (CMOS) digital processing by orders of magnitude. However, programming the conductance requires digital-to-analog converters (DACs) and analog-to-digital converters (ADCs), adding complexity and power overhead, and limiting scalability in purely digital integrated circuits.
This work is a pioneering exploration of memristive–CMOS hybrid electronics for experimental physics, covering conductance programming experiments for memristors and the development of all-digital DACs and ADCs for hybrid memristive–FPGA computing. Our bench-top measurements demonstrate that it is possible to tune the conductance of Knowm self-directed channel memristors over eight non-overlapping conductance states within a 50–250 μS range, with a 10% error. Our ADC has a Wilkinson architecture and incorporates a 400 MSps time-to-digital converter based on a tapped delay line with 3.5 ps elements. It operates with 6.2 effective number of bits (ENOB) at 1 MHz over a 1.5 V input range, and full-scale nonlinearity below 1%. Our DAC, based on a digitally controlled delay line, operates up to nearly 500 kHz with 7.9 ENOB, covering a 2.3 V output range. Speaker: Dr Raffaele Giordano (Universita di Napoli Federico II (IT)) -
12:10
Co-Design for Ultra-Small ML on FPGAs with an Open-Source Toolchain and AI Agentic Workflow 20m
Modern advances in machine learning and microelectronics enable efficient real-time, on-chip data processing under strict latency, power, and bandwidth constraints. Compact models implemented directly in hardware can replace fixed logic to perform intelligent feature extraction, classification, or denoising at the detector front-end, supporting applications from detector readout in high energy physics to adaptive control, accelerator diagnostics, and low-latency autonomous systems. We present an example of “edge” signal processing for next-generation detector readout, using a neural network tailored to fit within a highly latency- and resource-limited system (much lower than that of traditional commercial FPGAs). We highlight the design choices required to translate an algorithm into an efficient hardware implementation, including model sizing, quantization, and dataflow considerations. Finally, we discuss how fully open-source toolchains and AI-assisted development through the use of agents can accelerate cross-disciplinary co-design by streamlining the workflow from model design to hardware deployment, improving reproducibility, and enabling faster iteration for both research and large-scale development.
Speaker: Qibin Liu (SLAC National Accelerator Laboratory)
-
10:30
-
12:30
→
15:00
Lunch break 2h 30m
-
15:00
→
15:40
Awards Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
-
15:40
→
17:20
Real Time Diagnostics, Digital Twin, Control, Monitoring, Safety and Security Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
-
15:40
Fast Beam Position Calculation Implemented in FPGA 20m
This paper presents the design of an electron beam position processor with fast beam position calculation implemented in an FPGA. The processor will be adopted for the free electron laser and high magnetic field (FEL-HMF) facility constructed by Anhui University in Hefei, China. It consists of an RF front-end board, an ADC and FPGA board, and an AC/DC power supply module, and is able to process the 476 MHz RF signal from beam position pick-ups at a sampling rate of 250 Msps. The RF front-end board cascades two low-pass filters, two bandpass filters, two low-noise amplifiers, and a digital attenuator, achieving out-of-band suppression >60 dB across the 466–495 MHz frequency range. Its total gain is 40 dB, with 31 dB of controllable gain in 1 dB steps. The ADC and FPGA board carries two dual-channel ADC chips and a Zynq SoC XC7Z045. We have implemented all digital signal processing and data calculation in the FPGA to reduce latency and enhance throughput. We tested the beam position processor and compared the x and y results with CPU calculation. The results show that the total processing latency is less than 100 sampling clocks; the position precision (standard deviation) is 8.8 μm in x and 1.4 μm in y; and the differences between the CPU and FPGA results are 1.3 μm and 0.52 μm, respectively.
Speaker: Mr Wei Peng (Anhui University) -
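The per-sample position arithmetic such a processor performs is typically the difference-over-sum formula applied to the four pickup amplitudes. The button ordering and the sensitivity coefficients below are illustrative assumptions, not the FEL-HMF calibration.

```python
def beam_position(a: float, b: float, c: float, d: float,
                  kx: float = 10.0, ky: float = 10.0):
    """Difference-over-sum beam position estimate from four pickup
    amplitudes (textbook BPM formula; button layout and kx/ky
    sensitivity coefficients in mm are assumed, not measured values).
    """
    s = a + b + c + d
    x = kx * ((a + d) - (b + c)) / s
    y = ky * ((a + b) - (c + d)) / s
    return x, y
```

In the FPGA this reduces to a few adders and one divide per axis, which is why the full chain fits in under 100 sampling clocks.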
16:00
Online PID in the RIBF DAQ system using Alveo/Versal data-center accelerator cards 20m
High-speed data processing in DAQ systems using hardware accelerators is gaining traction to cope with the increasing intensity of accelerator beams. We are investigating the potential of implementing such accelerator devices for the DAQ system at the RIKEN RIBF. AMD’s Alveo series and Versal VCK5000 cards, which leverage FPGA and ACAP architectures, support PCI Express for seamless integration into standard workstations. We have implemented a test bench for the online PID monitoring system, which consists of three machines: 1. A DAQ machine, running the standard RIBF DAQ and event builder software along with a data replayer that generates the raw data packet stream using previously recorded experimental data. 2. The TX machine, equipped with an Alveo U50 card, receives the re-built event blocks from the DAQ machine via TCP. The received raw events are processed directly in-memory and transmitted via the QSFP28 port with a 40 Gbps communication throughput. 3. The RX machine, fitted with a Versal VCK5000 card, receives the data and performs calibration, track reconstruction, and particle identification for the BigRIPS separator. The processed data is continuously saved using double buffering and visualized in real-time. We have successfully demonstrated the end-to-end operation of this scheme, achieving a visualization latency, measured from the start of data acquisition, of less than one second, including all communication and data processing. In this contribution, we will present the current status and future prospects of this project.
Speaker: Yuto Ichinohe -
16:20
Evaluation of a Real-Time FPGA-Based Thomson Scattering Diagnostic with diagnostic-to-interlock communication for enhanced operational flexibility at ASDEX Upgrade 20m
The ASDEX Upgrade (AUG) programme aims to resolve critical physics questions for ITER operation and the development of plasma scenarios for a future fusion reactor. While diagnostics are regularly upgraded to support these efforts, safety systems remain in place to protect the machine. Because AUG is a pulsed device, each plasma discharge is valuable, making losses due to false-positive interlock actions costly. The safety systems, including those linked to neutral beam injection (NBI), are configured conservatively because of beam shine-through and its potential to damage plasma-facing components in low-density plasmas. Although this effect is well studied, better real-time and reliable data were not always available. A new implementation using intelligent data acquisition systems enables real-time, in-FPGA processing of the Thomson scattering density evaluation and direct communication, according to a set of rules, with the neutral beam injection interlock system to relax the latter's operating thresholds. To achieve these goals, the Thomson scattering evaluation has been ported to real time using development tools that combine High-Level Synthesis with traditional hardware description languages; the hardware platform is a Teledyne ADQ32 DAQ device. In addition, a new decision-making module has been implemented, allowing direct diagnostic-to-interlock communication. The results are still shared with the integrated discharge control system (DCS) real-time application, but this is not required, making the interlock path independent of the control software. The contribution describes the hardware and software elements used in the evaluation, the methodology used for the hardware and software implementation, and the results obtained.
Speaker: Cesar Gonzalez Brito (Universidad Politécnica de Madrid) -
16:40
MDSplus Redis Based Distributed Dispatcher for ITER Neutral Beam Test Facility 20m
The ITER Neutral Beam Test Facility (NBTF) supports the development and validation of the neutral beam injection systems required for ITER. Its two major experiments, SPIDER and MITICA, operate complex infrastructures that demand reliable coordination across many distributed subsystems. Recent SPIDER campaigns, following the 2024 restart with caesium‑assisted operation, demonstrated improvements in beam uniformity and current density with upgraded RF generators and enhanced pumping. In parallel, MITICA advanced toward integrated megavolt‑level operation. These activities highlight the need for a robust dispatching framework capable of managing tightly coupled tasks under real‑time constraints.
To address the limitations of the earlier centralized dispatcher, a new MDSplus Redis‑based architecture has been developed. The system introduces a distributed model composed of a central Action Dispatcher and multiple Action Servers organized by function. Redis, used as both message broker and shared state repository, provides low‑latency task queues, publish/subscribe communication, and persistent state recovery. This design improves scalability, modularity, and fault tolerance, while enabling parallel execution of independent actions and rapid system‑wide synchronization.
A Python‑native Web Monitor complements the dispatcher by offering real‑time views of server activity, action progression, logging streams, and heartbeat diagnostics. Implemented with Flask and Gunicorn, it ensures alignment with the Redis‑based backend and reduces integration complexity compared to earlier multi‑language solutions. The new system has been validated in full discharge cycles in both SPIDER and MITICA and is now being adopted by external laboratories, demonstrating its suitability for large‑scale, distributed fusion experiments.
Speaker: Nuno Cruz (Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001, Lisboa, Portugal) -
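The dispatcher/Action Server split described above can be sketched with an in-process stand-in for the Redis primitives: a `queue.Queue` plays the role of a Redis list used as a task queue (RPUSH to enqueue, BLPOP to consume). This mirrors the control flow only; the class and method names are illustrative and this is not the NBTF implementation.

```python
import queue
import threading

class ActionDispatcher:
    """Minimal sketch of a central dispatcher feeding per-function
    Action Servers. `queue.Queue` stands in for a Redis list so the
    pattern can be shown without a running Redis server.
    """
    def __init__(self, functions):
        self.queues = {f: queue.Queue() for f in functions}
        self.done = {}

    def dispatch(self, function: str, action: str):
        self.queues[function].put(action)        # ~ RPUSH <function> <action>

    def serve(self, function: str, handler):
        """Start one Action Server consuming its function's queue."""
        def worker():
            while True:
                action = self.queues[function].get()  # ~ BLPOP <function>
                if action is None:                    # sentinel: shut down
                    break
                self.done[action] = handler(action)   # ~ publish completion
        t = threading.Thread(target=worker, daemon=True)
        t.start()
        return t
```

Because each function owns its queue, independent actions run in parallel while ordering is preserved within a function, which is the scalability property the Redis-based design targets.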
17:00
FPGA-Based Autonomous Data Logging for Real-Time Beam Monitor Signal Processing in Proton Therapy 20m
We present an FPGA-based autonomous logging system integrated into the control infrastructure of a proton therapy facility, designed to optimize detector signal processing while maximizing memory efficiency. The system manages up to four beam monitors connected via optical communication links, each providing measurement samples at 10 µs intervals.
The FPGA implements intelligent threshold-based data acquisition, automatically initiating logging when signals exceed a pre-defined on-threshold and terminating when they fall below an off-threshold. This autonomous operation ensures that only relevant data segments are captured, eliminating unnecessary storage of baseline and transition periods. The system provides a $2^{18} \times 16$-byte recording capacity, enabling continuous logging of up to 2.6 seconds at the full sampling rate. Extended observation periods are achieved through configurable $2^n$ sample compression, where the FPGA calculates real-time mean values for all four channels and variance/standard deviation for two selected channels.
The control system, running on an IOC board with an attached FPGA, processes the logged data during idle states. Post-processing algorithms automatically trim the signal rise and fall-off periods from each recorded data bundle, isolating the stable plateau regions across all four detectors. Averaged plateau values enable precise calculation of detector signal ratios, which are used to calibrate scaling factors that compensate for beam-line transmission variations, ensuring stable downstream beam currents essential for reliable treatment delivery.
This integrated approach, combining autonomous FPGA-based data acquisition with efficient post-processing, demonstrates significant improvements in memory utilization and signal quality for real-time beam monitoring applications. Speaker: Christian Groh (Paul Scherrer Institute)
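The on/off-threshold gating and the $2^n$ block compression described in this abstract can be modeled in a few lines of software; the threshold values and block size in the usage example are illustrative, not the facility's settings.

```python
def threshold_log(samples, on_thr: float, off_thr: float):
    """Hysteresis-gated logging: open a segment when the signal exceeds
    `on_thr`, close it when the signal drops below `off_thr`
    (on_thr > off_thr), so only beam-on segments are stored --
    a software model of the FPGA gating (sketch only).
    """
    segments, current = [], None
    for s in samples:
        if current is None and s > on_thr:
            current = [s]             # crossed on-threshold: start segment
        elif current is not None:
            if s < off_thr:
                segments.append(current)
                current = None        # fell below off-threshold: close segment
            else:
                current.append(s)
    if current is not None:
        segments.append(current)
    return segments

def compress(segment, n: int):
    """Mean over non-overlapping 2**n-sample blocks (trailing partial
    block dropped), mirroring the configurable 2^n compression."""
    k = 1 << n
    return [sum(segment[j:j + k]) / k
            for j in range(0, len(segment) - k + 1, k)]
```

The two thresholds form a hysteresis band, so noise near a single threshold cannot open and close segments repeatedly.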
-
15:40
-
17:20
→
17:50
Closing talks Maria Luisa Room (Hotel Hermitage)
Maria Luisa Room
Hotel Hermitage
-
09:00
→
10:00