Speaker
Description
The High Luminosity Large Hadron Collider (HL-LHC) will produce at least 250 inverse femtobarns of data per year. Analyzing this data requires simulating a comparably large number of Monte Carlo events, which poses a considerable challenge to the already-optimized full CMS detector simulation based on Geant4. One avenue being explored is to modify the simulation parameters so that events are processed even more quickly, at the cost of reduced accuracy; machine learning algorithms would then be applied to the reduced-accuracy output to recover a high-quality final result. This contribution describes first steps in this direction, in which we vary parameters such as RusRoNeutronEnergyLimit and RusRoProtonEnergyLimit, alone and in combination, in the detector simulation and study their impact on running time and physics output.
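
The parameter scan described above could be sketched as follows. This is a minimal illustration only: the candidate values, the `run_simulation` function, and its return value are all placeholders, not the actual CMSSW/Geant4 workflow or the values used in the study.

```python
# Hypothetical sketch of a timing scan over Russian-roulette energy limits.
# run_simulation is a stand-in for launching the full detector simulation.
import itertools
import time

# Placeholder candidate values (GeV) for the two energy limits.
neutron_limits = [0.0, 0.01, 0.1]
proton_limits = [0.0, 0.01, 0.1]

def run_simulation(neutron_limit, proton_limit):
    """Stand-in for a Geant4 job run with the given limits.

    A real scan would run the full CMS simulation and collect timing
    plus physics validation output for each parameter combination.
    """
    return {"neutron_limit": neutron_limit, "proton_limit": proton_limit}

results = []
for n_lim, p_lim in itertools.product(neutron_limits, proton_limits):
    start = time.perf_counter()
    output = run_simulation(n_lim, p_lim)
    elapsed = time.perf_counter() - start
    results.append((n_lim, p_lim, elapsed, output))

# One timing entry per combination: 3 x 3 = 9 scan points.
print(len(results))
```

Comparing the per-point running times and the corresponding physics output against the default configuration is what determines which speed/accuracy trade-offs are worth handing to the downstream machine-learning correction.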