Description
Modern advances in machine learning and microelectronics enable efficient real-time, on-chip data processing under strict latency, power, and bandwidth constraints. Compact models implemented directly in hardware can replace fixed logic to perform intelligent feature extraction, classification, or denoising at the detector front-end, supporting applications ranging from detector readout in high-energy physics to adaptive control, accelerator diagnostics, and low-latency autonomous systems. We present an example of “edge” signal processing for next-generation detector readout, using a neural network tailored to fit within a highly latency- and resource-constrained system (with a resource budget far below that of traditional commercial FPGAs). We highlight the design choices required to translate an algorithm into an efficient hardware implementation, including model sizing, quantization, and dataflow considerations. Finally, we discuss how fully open-source toolchains and agent-driven, AI-assisted development can accelerate cross-disciplinary co-design by streamlining the workflow from model design to hardware deployment, improving reproducibility, and enabling faster iteration for both research and large-scale development.
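As a minimal illustration of the quantization step mentioned above, weights can be mapped to a signed fixed-point format before hardware deployment. This sketch is illustrative only; the bit widths (8 total bits, 6 fractional bits) are hypothetical choices, not the configuration used in the presented work.

```python
import numpy as np

def quantize_fixed_point(w, total_bits=8, frac_bits=6):
    """Quantize values to a signed fixed-point grid with the given bit widths.

    Each value is rounded to the nearest multiple of 2**-frac_bits and
    clipped to the representable signed range, mimicking the rounding and
    saturation behavior of on-chip fixed-point arithmetic.
    """
    step = 2.0 ** -frac_bits
    lo = -(2 ** (total_bits - 1)) * step          # most negative code
    hi = (2 ** (total_bits - 1) - 1) * step       # most positive code
    return np.clip(np.round(w / step) * step, lo, hi)

# Example: small weights snap to the grid; out-of-range weights saturate.
weights = np.array([0.7321, -1.95, 0.015, 3.2])
q = quantize_fixed_point(weights)
```

Evaluating the quantization error on a validation set for several candidate bit widths is one common way to trade accuracy against the resource budget of the target device.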
| Minioral |
|---|
| Yes |