AI Algorithms in Pixel Detectors

Auditorium (Physical Sciences Building, IISc)



      Abstract:
      The integration of AI algorithms directly into pixel detectors presents a transformative approach to managing the substantial data volumes generated by high-energy physics experiments, X-ray imaging, and other applications. We have investigated two diverse applications and developed an algorithm-to-accelerator design flow that spans creating a "use-inspired" specification, generating datasets, and on-chip implementation and testing.

      I will highlight some of the problems we encountered and the results we obtained:

      In the context of highly granular pixel detectors used for tracking charged particles, a neural network has been developed to filter out low-momentum tracks, thereby reducing data volume by up to 75.7%. This network operates with minimal power consumption and a small area footprint, making it suitable for implementation in custom readout integrated circuits in 28 nm CMOS technology. The approach leverages the physical properties of charge clusters and the precise measurements provided by the detectors to enhance data-reduction efficiency and physics performance in high-luminosity environments such as the High-Luminosity Large Hadron Collider.
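      To make the idea concrete, below is a minimal sketch of cluster-level filtering with a tiny neural network. It is not the speaker's actual design: the feature set (e.g. cluster width, total charge), layer sizes, and the 0.5 cut are illustrative assumptions, and a real on-chip version would use trained, quantized weights.

      ```python
      # Minimal sketch (illustrative, not the talk's network): a tiny MLP that scores
      # pixel clusters so low-momentum candidates can be dropped before readout.
      import torch
      import torch.nn as nn

      class ClusterFilter(nn.Module):
          """Few-neuron classifier, small enough in spirit for in-pixel/ROIC inference."""
          def __init__(self, n_features: int = 4, n_hidden: int = 8):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(n_features, n_hidden),
                  nn.ReLU(),
                  nn.Linear(n_hidden, 1),
                  nn.Sigmoid(),  # probability that the cluster comes from a high-momentum track
              )

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              return self.net(x)

      # Usage sketch: score clusters on assumed features (e.g. width, charge) and
      # forward only those above a cut, mimicking on-detector data reduction.
      # Weights here are untrained; in practice they would be trained offline.
      model = ClusterFilter()
      clusters = torch.rand(16, 4)            # 16 clusters x 4 assumed features
      scores = model(clusters).squeeze(1)     # per-cluster "keep" probability
      kept = clusters[scores > 0.5]           # clusters transmitted off-chip
      print(f"kept {kept.shape[0]} of {clusters.shape[0]} clusters")
      ```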

      Similarly, in the domain of X-ray detectors, algorithms such as Principal Component Analysis (PCA) and autoencoders (AE) have been implemented within pixelated readout integrated circuits (ROICs) for lossy data compression. The PCA achieves 50× compression, while the AE achieves 70×, both designed to minimize off-chip data-transfer bottlenecks. These techniques are integrated in a 65 nm CMOS process, highlighting the synergy between advanced CMOS technology and machine learning for efficient data handling. The compression algorithms not only reduce data volume but also maintain the accuracy required for image reconstruction and scientific analysis, demonstrating the potential of AI to revolutionize data processing in scientific instrumentation.
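      As a rough illustration of the PCA side of such compression, the sketch below projects flattened pixel frames onto a small basis learned offline and reconstructs them afterward. The frame size, component count, and synthetic Poisson data are assumptions for demonstration only and do not reproduce the 50×/70× figures or the on-chip implementation described in the talk.

      ```python
      # Minimal PCA-compression sketch with synthetic data (illustrative assumptions:
      # 32x32-pixel frames, 20 retained components, Poisson-distributed counts).
      import numpy as np

      rng = np.random.default_rng(0)
      n_frames, n_pixels = 500, 1024                 # e.g. 32x32 pixels per frame
      frames = rng.poisson(5.0, (n_frames, n_pixels)).astype(float)

      # Offline step: learn a small PCA basis from training frames via SVD.
      mean = frames.mean(axis=0)
      _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
      k = 20                                         # retained principal components
      basis = vt[:k]                                 # k x n_pixels projection matrix

      # On-chip analogue: each frame is reduced to k coefficients before readout,
      # giving roughly n_pixels / k compression (about 51x with these numbers).
      coeffs = (frames - mean) @ basis.T             # compressed representation
      recon = coeffs @ basis + mean                  # off-chip reconstruction
      err = np.linalg.norm(frames - recon) / np.linalg.norm(frames)
      print(f"compression ~{n_pixels / k:.0f}x, relative reconstruction error {err:.3f}")
      ```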

      Finally, I will conclude with an outlook on utilizing emerging technologies and application-specific hardware-software co-design.

      About the speaker:
      Farah Fahim is a senior engineer specializing in mixed-signal ASIC design. For the past 15 years, she has developed low-noise, high-speed, reconfigurable pixel detectors that operate in harsh environments for a variety of applications, including high-energy physics, photon science, and space science. Her research is currently focused on low-power, deep-cryogenic readout and control electronics for QIS applications. Fahim has a Ph.D. in Electrical and Computer Engineering from Northwestern University. She joined Fermilab in 2009; prior to that, she was an engineer at Rutherford Appleton Laboratory, UK. She holds five patents and several records of invention. She received the best presentation award at IEEE NSS in 2016.
      She is now the Division Head of Microelectronics in the Emerging Technologies Directorate at Fermi National Accelerator Laboratory, USA.

      Speaker: Farah Fahim (Fermilab)