EPICS Collaboration Meeting, September 2025

US/Central
Palmer House Hilton
17 East Monroe Street, Chicago, IL 60603, USA
Andrew Johnson (Argonne National Laboratory)
Description

Welcome!

The EPICS Collaboration has traditionally held a Collaboration Meeting alongside the ICALEPCS conferences, which take place every two years. ICALEPCS 2025 will be held in downtown Chicago.

EPICS meetings provide developers and managers from EPICS sites with an opportunity to discuss contributions to shared projects and to consider future developments. Participants can see how other organizations are using the software, learn about new and updated EPICS tools, and discuss enhancements to existing tools. The aim of these meetings is to provide an opportunity to meet other users, maximize the usefulness of EPICS to the whole community, and avoid unnecessary duplication of effort.

This EPICS Meeting is a full-day conference workshop scheduled for the Saturday before the main conference; all meeting attendees must register for that workshop through the ICALEPCS registration process.

Please submit an abstract for your talk through the Call for Abstracts page on this website. Note that you will probably need to create a new account for this site, as it does not appear to share authentication with any other Indico website.

Standard talks (20 minutes including questions) offered for this meeting should not duplicate presentations (oral or poster) given at the main ICALEPCS conference. Lightning talks (5 minutes, no questions) allow smaller development projects and proposals to be presented, and may refer to talks and posters being presented in the main ICALEPCS meeting. Standard talks that request it may be given more time if any is available; see the abstract submission instructions for details.

    • 08:00–09:00
      Registration 1h
    • 09:00–09:10
      Welcome: Introduction Adams Room
      Convener: Mr Andrew Johnson (Argonne National Laboratory)
      • 09:00
        Welcome 10m Adams Room

        Meeting metadata

        Speaker: Mr Andrew Johnson (Argonne National Laboratory)
    • 09:10–10:30
      Standard talks: T1 Adams Room
      Convener: Mr Andrew Johnson (Argonne National Laboratory)
      • 09:10
        Converting the GSECARS APS Beamlines from VME 20m

        Most APS beamlines use VME crates as a major part of their control systems.
        The VME hardware is expensive and becoming obsolete, with replacements for many of these modules no longer available. The VxWorks software is also expensive.

        Prior to the shutdown for APS-U in April 2023, the 4 GSECARS beamlines at Sector 13 were using an EPICS system based primarily on 7 VME crates running VxWorks.

        During the shutdown we completely replaced the VME systems with the following:
        • 52 Galil DMC-4183 motor controllers replaced the OMS-58 and MAXv VME motor controllers.
        • 7 16-channel Moxa terminal servers replaced the VME serial communication modules.
        • Measurement Computing USB-CTR08 replaced the Joerger and SIS 3820 scalers and multi-channel scaler functions.
        • Measurement Computing USB-3104 replaced the DAC128V for analog output.
        • Measurement Computing USB-1808X replaced the IP-330 for analog input.
        • Measurement Computing E-DI024 replaced the IP-Unidig for high-density digital I/O.
        • ProSoft MVI46-MNET with Modbus/TCP replaced the Allen-Bradley 6008-SV and DCM for EPICS communication with the equipment protection system PLCs.

        The total hardware cost to replace the 7 VME systems for 2 FOEs and 5 experimental stations was less than $200K. This includes more than 400 motor channels.
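
        As an aside, the Modbus/TCP link mentioned above is easy to spot-check from Python. The following is a generic sketch using the pymodbus library, not anything from the talk; the host, register addresses, and unit id are hypothetical, and the slave-id keyword varies between pymodbus versions:

            from pymodbus.client import ModbusTcpClient   # pymodbus 3.x import path

            # Hypothetical PLC address; port 502 is the Modbus/TCP default.
            client = ModbusTcpClient('192.168.1.50', port=502)
            client.connect()
            try:
                # Read 8 holding registers starting at address 0 from unit 1
                # ('slave=' in pymodbus 3.x; older/newer versions differ).
                rr = client.read_holding_registers(address=0, count=8, slave=1)
                if not rr.isError():
                    print(rr.registers)
            finally:
                client.close()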

        Speaker: Mark Rivers
      • 09:30
        Shaking the oac-tree 20m

        This talk will present an exploration of ideas about how the oac-tree sequencer from ITER could be integrated into the surrounding EPICS infrastructure. Topics such as implementing reactive sequences, modularizing the trees and using templating to configure reusable modules will be covered.

        Speaker: Timo Korhonen (European Spallation Source ERIC)
      • 09:50
        Concepts for Gating EPICS Alarms 20m

        The Karlsruhe Research Accelerator (KARA) at KIT uses a customized software layer to “activate” EPICS alarms based on the operating state of the accelerator. Since KARA does not run 24/7 and also operates in very different modes, the basic idea is to be able to temporarily disable or enable some EPICS alarms based on certain conditions. To date, we have used our own Java software layer for this purpose. After more than 10 years of using, expanding and maintaining this service, both the wish list of features and the requirements have evolved. Together with the difficulties in maintaining the Java codebase and an upcoming new accelerator, it seems like a good time to re-evaluate the current approach and consider a complete redevelopment. Therefore, we are currently evaluating different approaches for the various aspects of such a system. The goal of this talk is to gather feedback, potential alternative ideas and opinions on how to best approach such a system in the current EPICS ecosystem.
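
        To make the gating idea concrete, here is a toy Python sketch (our illustration, not KARA's Java service) that relaxes or restores record alarm-limit severities when a machine-mode PV changes; all PV names and modes are hypothetical:

            import time
            import epics   # pyepics

            GATED = ['BPM:01:X', 'BPM:02:X']               # hypothetical gated alarm PVs
            SEVERITIES = {'user': ('MINOR', 'MAJOR'),      # alarms active in user mode
                          'standby': ('NO_ALARM', 'NO_ALARM')}  # effectively disabled

            def on_mode(char_value=None, **kw):
                lo_sev, hi_sev = SEVERITIES.get(char_value, SEVERITIES['standby'])
                for name in GATED:
                    epics.caput(name + '.LSV', lo_sev)     # low-limit alarm severity
                    epics.caput(name + '.HSV', hi_sev)     # high-limit alarm severity

            mode = epics.PV('ACC:Mode', callback=on_mode)  # re-gate on each mode change
            while True:
                time.sleep(1)                              # callbacks run in background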

        Speaker: Edmund Blomley (Karlsruhe Institute of Technology)
      • 10:10
        PVA Connections and Threads 20m

        Overview of a bug found in pvaccess that created zombie threads and affected both Phoebus and the Archiver Appliance in different setups.

        Speaker: Sky Brewer (ESS)
    • 10:30–11:00
      Coffee break 30m
    • 11:00–11:30
      Lightning talks: L1 Adams Room
      • 11:00
        Centralized EPICS PV Access for VDI Users at NSLS-II via CA Gateway Architecture 5m

        At NSLS-II, EPICS applications for the accelerator and beamlines reside on dedicated VLANs that are isolated for security and network bandwidth. Because clients must run their applications within their respective networks, this poses a challenge for enabling centralized observability and control for facility staff with various roles. We have created a single portal to access EPICS process variables (PVs) across the facility, using Virtual Desktop Infrastructure (VDI) and a dual Channel Access Gateway (CAGW) architecture on a dedicated “EPICS VDI” network. For each beamline and the accelerator, two dedicated CAGW instances are deployed: one on the “EPICS VDI” network serving client applications, and one on the control system VLAN communicating with IOCs. The controls-side gateway bridges the isolated “Controls” network and the routable “Science” network, enabling inter-gateway communication over beamline-specific ports configured by convention and governed by firewall rules.
        EPICS channel access security is enforced with PVs read-only by default, while Active Directory group membership determines beamline-specific write privileges. Any EPICS CA-based client tool can run in the VDI environment, including CS-Studio Phoebus—the primary use case enabling staff to view and interact with PVs across the entire facility from a single session. Having PV access through the VDI portal removes the need for running client software directly in the Controls environment, thereby reducing system exposure and improving architectural separation. CAGW configuration and deployment are automated using Ansible, with templated generation of gateway settings, including network configuration, PV lists, and access control rules. This approach builds on a proven model used for accelerator-beamline communication and has demonstrated stable performance across multiple deployed instances.

        Speaker: Anton Derbenev
      • 11:05
        AreaDetector Monthly Collaboration Meetings 5m

        AreaDetector developers started holding a series of remote collaboration meetings in February 2024, with the main goals of discussing and addressing pull requests (PRs), defining collaboration policies and procedures for areaDetector repositories, and deciding on administrative matters when required. This initiative increased the volume of discussions and reviews of proposed changes in the following months, while still having a loosely defined agenda for each meeting. After a short cooldown period at the beginning of 2025, a slightly new format was proposed, shifting the goals towards gathering new people interested in maintaining the active repositories, finding reviewers for stagnant PRs, defining standards for reviews and new drivers, and building a community that will keep the repositories in good health in the short and long term, while also discussing recent and stale contributions.

        This talk will cover how these meetings are prepared, the decisions taken, the progress achieved so far, what is still pending, and how this kind of community effort could be applied to other EPICS modules and subcommunities.

        Speaker: Érico Nogueira Rolim (LNLS/CNPEM)
      • 11:10
        Data model based processing: missing building blocks? 5m

        EPICS Normative Types allow exchanging information consistently based on data models, which -- in our experience -- simplifies establishing processing chains.

        Based on our experience, we suggest looking at these processing chains through the triple "aggregate, transform, modify". We think it would be useful to add aggregation (or enrichment) and modification of data models to EPICS as flexible general records.
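
        As a concrete illustration of one "transform" stage in such a chain (our sketch, not the authors' proposal), the following uses p4p to monitor an upstream NTScalar PV and republish a scaled value downstream; the PV names are hypothetical:

            from p4p.client.thread import Context
            from p4p.nt import NTScalar
            from p4p.server import Server
            from p4p.server.thread import SharedPV

            out = SharedPV(nt=NTScalar('d'), initial=0.0)  # downstream data model
            ctxt = Context('pva')

            def on_update(value):
                # The client unwraps NTScalar to a float-like value; the
                # transform changes the number, the model carries the rest.
                out.post(2.0 * value)

            sub = ctxt.monitor('CHAIN:IN', on_update)      # hypothetical upstream PV
            Server.forever(providers=[{'CHAIN:OUT': out}]) # serve the transformed PV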

        Speaker: Pierre Schnizer (HZB)
      • 11:15
        OPC UA Device Support - update 5m

        A collaboration (ITER/PSI/ESS/HZB-BESSY) maintains and develops a Device Support module for integration using the OPC UA industrial SCADA protocol. Goals, status and roadmap will be presented.

        Speaker: Mr Ralph Lange (ITER Organization)
      • 11:20
        linStat 5m

        A Linux-specific alternative to iocStats, mining /sys, /proc, and other dark(ish) corners of Linux for interesting health status to publish.
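
        linStat itself is an EPICS support module; purely to illustrate the kind of data it mines, here are a few lines of Python reading the same /proc sources:

            def loadavg():
                # /proc/loadavg: 1-, 5- and 15-minute load averages, then scheduler info.
                with open('/proc/loadavg') as f:
                    one, five, fifteen = f.read().split()[:3]
                return float(one), float(five), float(fifteen)

            def meminfo_kb(key='MemAvailable'):
                # /proc/meminfo: one "Key:   value kB" line per metric.
                with open('/proc/meminfo') as f:
                    for line in f:
                        if line.startswith(key + ':'):
                            return int(line.split()[1])

            print('load:', loadavg(), '; MemAvailable/kB:', meminfo_kb())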

        Speaker: Michael Davidsaver (Osprey DCS)
      • 11:25
        A Short EPICS Core Updates talk 5m

        Most EPICS Collaboration meetings have a talk from someone in the Core Developers Group which follows this formula: List some recent releases of EPICS and describe the major changes that were included in the latest; introduce some exciting new features that have been merged into the development branch since that last release; outline some functionality that's still being developed or is under review and might be in the next release; and mention that at some point we will create a new Git branch for the next release series, delete a bunch of old code and add some new code that isn't compatible with the older compilers and targets that we won't be supporting any more.

        This is the lightning version of that talk.

        Speaker: Mr Andrew Johnson (Argonne National Laboratory)
    • 11:30–12:30
      Standard talks: T2 Adams Room
      • 11:30
        EPICS IOC Control of Timepix3 Detector System: Emulator, Serval, Detector Driver, and systemd Integration 20m

        In this presentation, we will describe the design, implementation, and integration of a modular EPICS-based control architecture for the Timepix3 detector ecosystem deployed at ORNL neutron and X-ray facilities. Our control framework leverages multiple interlinked components:
        1. ADTimePix3 Detector Driver: Built upon the EPICS areaDetector framework, the ADTimePix3 driver offers production-ready capabilities including real-time data acquisition, health monitoring, threshold tuning, preview imaging, and multi-stream support via socket and .tpx3 file outputs. It enables the sparse, triggerless readout of the 65k-pixel Timepix3 chip, achieving up to 40 MHits/s/cm² with simultaneous ToA and ToT recording.
        2. Serval and Emulator IOCs: Control of the Timepix3 detector is centralized through the Serval HTTP/JSON-based server, which interfaces with the detector hardware. An emulator (emulator IOC) replicates Serval functionality for offline testing, development, and IOC validation. This allows seamless simulation of device behavior without physical hardware.
        3. EPICS IOC Control of systemd Processes: To enhance robustness and integration, EPICS IOCs manage essential background services (e.g. Serval, emulator) using systemd via a custom IOC layer (systemdIOC). This enables EPICS to monitor, start, stop, and supervise service lifecycles such as D-Bus, ensuring reliable orchestration of system components.
        4. Integrated Workflow and Use Cases: During IOC startup, Serval is launched via systemd and then configured by the ADTimePix3 IOC (with IP tunneling, .tpx3 output paths, thresholds, calibration uploads). The emulator IOC mimics Serval behavior for testing. Users interact with the system through CSS BOY or Phoebus GUIs, adjusting chip thresholds, loading calibration files, viewing preview images, and monitoring detector metrics.
        5. Benefits & Outcomes:
        ◦ Enables continuous detector operation with real-time previews and high-throughput data capture.
        ◦ Improves testability and IOC development by decoupling hardware access through the emulator.
        ◦ Enhances system reliability and maintainability by managing background services from EPICS, improving startup resilience and operational diagnostics.
        In summary, this layered EPICS IOC architecture—encompassing hardware abstraction via emulator, detector acquisition via ADTimePix3, and service control via systemd—supports flexible deployment, testing, and stable operations of Timepix3-based experiments. We believe this modular approach can serve as a blueprint for integrating complex detector systems into EPICS control environments.
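
        As a flavor of item 3 above (a hedged sketch, not the actual systemdIOC code), service supervision can be reduced to a pair of Python helpers that an IOC layer could expose as records; the unit name is hypothetical:

            import subprocess

            UNIT = 'serval.service'   # hypothetical systemd unit for the Serval server

            def unit_active(unit=UNIT):
                # 'systemctl is-active' prints e.g. 'active' or 'inactive'.
                r = subprocess.run(['systemctl', 'is-active', unit],
                                   capture_output=True, text=True)
                return r.stdout.strip() == 'active'

            def unit_start(unit=UNIT):
                subprocess.run(['systemctl', 'start', unit], check=True)

            if not unit_active():
                unit_start()          # e.g. launch Serval before IOC configuration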

        Speaker: Kazimierz Gofron (Oak Ridge National Laboratory)
      • 11:50
        EPICS Fundamentals Refresh 40m

        EPICS was developed as a toolkit for implementing scientific control solutions. It is designed and implemented to provide a robust and high performance set of tools that can be used to limit the need for software development. This presentation will revisit these fundamentals as they relate to the current state of EPICS.

        Speaker: Bob Dalesio
    • 12:30–14:00
      Lunch break 1h 30m

      Meal not provided; there are many restaurants and cafes in the nearby area.

    • 14:00–14:30
      Lightning talks: L2 Adams Room
      Convener: Thomas Fors (Argonne National Lab)
      • 14:00
        CSS to Phoebus Transition at KIT 5m

        Control System Studio has been in use at KIT for over 10 years. After a long preparation period and a technical overhaul of the build and deployment systems, we have now started to actively use Phoebus. While the main paper is submitted to ICALEPCS, this Lightning Talk will give a short summary.

        Speaker: Edmund Blomley (Karlsruhe Institute of Technology)
      • 14:05
        Archiver Appliance at APS Accelerator: Overview and Practices 5m

        The Archiver Appliance at the APS accelerator records, stores, and manages process variable data to support operations and machine studies. This talk provides an overview of the deployment, including the process variables monitored, CPU and memory usage, storage rates, and hardware setup. We describe how sampling specifications (rate and method) are determined based on process variable type, update rate, and size, and how Python scripts are used to manage and retrieve process variables. Practical examples illustrate how archived data supports monitoring, troubleshooting, and accelerator studies.
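
        For readers unfamiliar with the retrieval side, a minimal Python sketch against the Archiver Appliance's standard JSON retrieval endpoint looks like this (host and PV name are hypothetical):

            import requests

            BASE = 'http://archiver.example.gov:17668/retrieval/data/getData.json'
            params = {'pv': 'S:EXAMPLE:Current',                 # hypothetical PV
                      'from': '2025-09-01T00:00:00.000Z',        # ISO 8601, UTC
                      'to':   '2025-09-02T00:00:00.000Z'}
            data = requests.get(BASE, params=params).json()
            for event in data[0]['data'][:5]:                    # first few samples
                print(event['secs'], event['val'])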

        Speaker: Lingran Xiao (Argonne National Laboratory)
      • 14:10
        Java implementation of secure PV Access and CA-to-PVA gateway 5m

        Presents a new project to bridge the EPICS CA and PVA protocols, in an effort to support testing and evaluation of Secure EPICS.

        Speaker: Klemen Vodopivec (ORNL/SNS)
      • 14:15
        A Phoebus Client for the Bluesky Queue Server 5m

        We introduce a native Phoebus client for orchestrating Bluesky experiments via the Queue Server. The client supports viewing/editing the plan queue, submitting validated plans, starting/pausing/stopping execution, opening/closing the run-engine environment, and monitoring status—all from within Phoebus.
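
        The client presented here is a Java/Phoebus application; for flavor, the same Queue Server operations look roughly like this through the existing Python API (bluesky-queueserver-api), with the plan and detector names hypothetical:

            from bluesky_queueserver_api import BPlan
            from bluesky_queueserver_api.zmq import REManagerAPI

            RM = REManagerAPI()                           # connect to the RE Manager
            RM.environment_open()                         # open the run-engine environment
            RM.wait_for_idle()
            RM.item_add(BPlan('count', ['det1'], num=5))  # queue a plan for validation
            RM.queue_start()                              # start executing the queue
            print(RM.status()['manager_state'])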

        Speaker: Kunal Shroff
      • 14:20
        Phoebus PVWS Data Source: WebSocket access to EPICS PVs 5m

        We present a new Phoebus data source that speaks directly to PVWS (PV Web Socket) endpoints, enabling low-latency, firewall-friendly access to EPICS Process Variables. PVWS bridges Channel Access and PVAccess to WebSockets and transmits values plus metadata (units, limits, severity, timestamps), sending full metadata once and then only deltas, reducing bandwidth. Integrating this data source in Phoebus lets any application (e.g., Display Builder, Data Browser) subscribe, write (when permitted), and operate seamlessly against control systems remotely via standard web infrastructure.
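
        For context, a PVWS endpoint can be exercised without Phoebus at all; a minimal Python subscriber sketch (URL hypothetical, message fields per our reading of the PVWS protocol) is:

            import asyncio
            import json
            import websockets

            async def main():
                # Hypothetical PVWS endpoint; 'sim://sine' is a simulated PV.
                async with websockets.connect('ws://localhost:8080/pvws/pv') as ws:
                    await ws.send(json.dumps({'type': 'subscribe',
                                              'pvs': ['sim://sine']}))
                    for _ in range(5):
                        msg = json.loads(await ws.recv())
                        if msg.get('type') == 'update':
                            print(msg.get('pv'), msg.get('value'))

            asyncio.run(main())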

        Speaker: Kunal Shroff
      • 14:25
        Collaboration update on the Phoebus Tools and Services Technology Stack 5m

        The Phoebus 5.X releases build on the existing framework by expanding the functionality of core modules and services, introducing new user-facing applications, and updating the technology stack to improve scalability, security, and long-term maintainability. The collaboration has also continued to broaden, with contributions from additional facilities, supporting Phoebus’s role as a sustainable and adaptable platform for control system applications.

        Speaker: Kunal Shroff
    • 14:30–15:30
      Standard talks: T3 Adams Room
      Convener: Thomas Fors (Argonne National Lab)
      • 14:30
        Leveraging MRF system for Fast Beam Interlock applications 20m

        The distributed architecture of MRF timing systems provides a fitting platform for implementing interlock functions with low latency. In the Fast Beam Interlock (FBI) system developed for Nusano, input statuses from Event Receivers (EVRs) are transmitted to the Event Master (EVM), combined into logical flags and propagated to EVR outputs.
        This enables fast and configurable propagation of critical signals across the accelerator. Because timing hardware is already near important equipment and connected via fiber optics, the approach minimizes additional infrastructure and leverages the existing MRF network. Upgrades to FPGA firmware and EPICS software extend the system to provide the required interlock functionality.

        Speaker: Luka Perusko (Cosylab)
      • 14:50
        PVmapper: EPICS nameserver without a database 20m

        At SNS we have developed a new nameserver that builds its PV cache dynamically by searching for PVs on the network and retaining them for as long as their IOCs are alive, without the need to pre-load or maintain a database of PVs. This project was started to support a highly diverse EPICS environment with many flavors of Channel Access clients and servers. PVmapper was implemented to allow fine-tuning of the EPICS broadcast traffic on the network.

        Speaker: Klemen Vodopivec (ORNL/SNS)
      • 15:10
        Integration of the EcosimPro CHL Model into EPICS using Python's OPC UA Library 20m

        A dynamic simulation model of the Central Helium Liquefier (CHL) at the Spallation Neutron Source (SNS) has been developed using the EcosimPro commercial software. This model facilitates replication of the production system, providing virtual environments for software testing, operator training, process analysis and troubleshooting. In this integration project, the EcosimPro model will be deployed as a standalone OPC UA server. As a proof of concept, a section of the model was successfully compiled and executed on both Windows and Linux. In addition, a standalone Python class utilizing the opcua library has been defined and tested to interact with the OPC UA server. Consequently, an EPICS softIOC with the PyDevice support module was successfully created to communicate with the test model. Progress and status of the initiative, including future plans, will be presented.
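
        A stripped-down version of such a Python test client, using the opcua library mentioned in the abstract (our sketch; the endpoint and node id are hypothetical):

            from opcua import Client   # python-opcua

            client = Client('opc.tcp://chl-sim.example.gov:4840')  # hypothetical server
            client.connect()
            try:
                # Node ids depend on how the EcosimPro model is exported.
                node = client.get_node('ns=2;s=CHL.Compressor1.SuctionPressure')
                print(node.get_value())        # read one simulated process value
            finally:
                client.disconnect()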

        Speaker: Marnelli Martinez (Spallation Neutron Source)
    • 15:30–16:00
      Tea break 30m
    • 16:00–16:30
      Lightning talks: L3 Adams Room
      Convener: Suyin Wang (Argonne National Lab)
      • 16:00
        The EPICS Training-VM - a scripted, reproducible, modular approach 5m

        Since 2023, the new Virtual Machine for EPICS Training (Training-VM) has used a collaborative approach based on Git branches/submodules, Vagrant builds and Ansible-scripted installation.
        The VM is available in different Linux flavors, with a scripted and reproducible build. Instances can be easily updated from GitHub, allowing per-event configuration and interesting off-label applications.

        Speaker: Mr Ralph Lange (ITER Organization)
      • 16:05
        epics-in-docker: a small framework for building slim IOC and EPICS tooling container images 5m

        The SIRIUS accelerators have used containers for IOCs for years, but build definitions and launch scripts were often duplicated, and image sizes could be over 3GB. On the other hand, the SIRIUS beamlines, until recently, used IOCs installed in a shared NFS, which complicated application management, especially across different OS versions.

        To address these issues, we have developed a framework for building slim IOC container images (e.g. ADAravis takes 300MB) using a curated set of dependencies (and their versions) and simple and short build definitions. We avoid duplicating shared information by using git submodules, which aids in versioning the base images used. The resulting container images include a standard set of installed packages and scripts, making them ready for deployment in a wide range of container orchestration setups. The shared interface provided by the EPICS build system allows us to also create images with EPICS tools, including CA and PVA gateways and epics-base utilities.

        For beamlines, it was necessary to adapt the IOC orchestration to also support containerized applications, keeping the same user interface for managing IOCs for beamline and support staff.

        This presentation aims to highlight some aspects of the epics-in-docker architecture, the user experience, and how SIRIUS manages containers. It also aims to present an overview of the tradeoffs made in epics-in-docker and other frameworks, such as our choice to not support different versions of dependencies.

        Speaker: Guilherme Rodrigues de Lima
      • 16:10
        An OPC UA Server for EPICS pvAccess 5m

        The proposed OPC UA server for EPICS pvAccess will enable EPICS-based systems to connect to a variety of external frameworks and tools. It will facilitate communication with not only SCADA frameworks but also any system with an OPC UA client, including HMIs, Remote Handling systems, and Virtual Reality tools, among others.
        This session will feature a short demo showing:
        - The gateway's web interface and some of its features
        - An example of integration with an industrial SCADA
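
        Conceptually (a sketch under our own assumptions, not the project's code), the bridge direction can be mimicked in a few lines of Python by mirroring one pvAccess PV into a python-opcua server variable; the PV name is hypothetical:

            import time
            from opcua import Server
            from p4p.client.thread import Context

            server = Server()
            server.set_endpoint('opc.tcp://0.0.0.0:4840/epics/')
            idx = server.register_namespace('epics')
            obj = server.get_objects_node().add_object(idx, 'PVs')
            var = obj.add_variable(idx, 'DEMO:VALUE', 0.0)   # hypothetical PV name
            server.start()

            ctxt = Context('pva')
            sub = ctxt.monitor('DEMO:VALUE', var.set_value)  # push PVA updates to OPC UA
            while True:
                time.sleep(1)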

        Speaker: Javier Cruz Miranda (University of Granada)
      • 16:15
        PVs in Python 5m

        We demonstrate creating PVs using the p4p library. Tools developed at ISIS to make this easier for developers are demonstrated, showing an example of how existing Python code may be easily adapted to add an EPICS pvAccess interface.
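
        The canonical p4p pattern (close to the library's own examples; the PV name is ours) is small enough to show in full:

            from p4p.nt import NTScalar
            from p4p.server import Server
            from p4p.server.thread import SharedPV

            pv = SharedPV(nt=NTScalar('d'), initial=0.0)    # a double PV

            @pv.put
            def handle(pv, op):
                pv.post(op.value())    # accept the written value and publish it
                op.done()              # complete the client's put

            Server.forever(providers=[{'DEMO:VALUE': pv}])  # serve until Ctrl+C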

        Speaker: Ivan Finch
      • 16:20
        EPICS diffractometer control with HKL calculations 5m

        An EPICS Input/Output Controller (IOC) for HKL crystallography calculations and diffractometer control is introduced. The IOC brings real-time, bidirectional crystal diffraction computations into seamless beamline control. This IOC, built using PyDevice, bridges efficient C‑based core calculations with flexible Python bindings, providing powerful forward and inverse crystallographic transformations (real-space motor rotations to reciprocal-space Miller indices, and vice-versa) across diverse diffractometer geometries—including 4‑circle, 6‑circle, and kappa setups. The interface is delivered through a Phoebus CSS GUI, enabling users to define lattice constants, compute UB matrices, refine alignment, and perform scan planning in reciprocal space.

        https://github.com/hkl-projects/ioc-hkl
        https://repo.or.cz/hkl.git
        https://github.com/klemenv/PyDevice
        https://github.com/ControlSystemStudio/phoebus

        Speaker: Alexander Baekey
      • 16:25
        Tests on Integration of Industrial and Scientific Control Systems for IFMIF-DONES 5m

        As part of the EUROFUSION consortium, S2Innovation has been actively involved in the development and validation of control system solutions for the IFMIF-DONES project, addressing one of the most critical challenges in large-scale scientific facilities: bridging the gap between open-source research frameworks and industrial-grade safety systems.
        The presentation will highlight three completed tasks:
        1. Hybrid EPICS + WinCC OA integration – development of a prototype Bridge ensuring bidirectional communication with <30 ms latency and >99.5% reliability, demonstrating that hybrid research–industry architectures can meet Machine Protection System (MPS) requirements.
        2. PLC–EPICS IOC communication tests – systematic comparison of communication protocols (OPC UA, s7plc, s7nodave) showing that while OPC UA offers interoperability, only direct s7nodave-based communication on modern PLCs consistently meets safety latency requirements.
        3. OPC UA pilot implementation – building a testbed replicating CODAC conditions, proving that IPC hardware with integrated S7-1500 dramatically reduces latency to ~15 ms, while standard S7-1200 PLCs remain unsuitable for critical paths.
        Key outcomes: validated guidelines for safe and reliable communication architectures in DONES, reduction of technological risk for future fusion projects (ITER, DEMO), and contribution to international best practices in control system integration.

        Speaker: Wojciech Soroka (S2Innovation Sp. z o.o.)
    • 16:30–16:50
      Standard talks: T4 Adams Room
      Convener: Suyin Wang (Argonne National Lab)
      • 16:30
        EPICS Council Report 20m

        Report on recent EPICS Council activities.

        Speaker: Karen White (Oak Ridge National Laboratory)
    • 16:50–17:00
      Valete: Wrap-up Adams Room
      Convener: Mr Andrew Johnson (Argonne National Laboratory)
      • 16:50
        Next Spring EPICS Meeting: come to Saclay (Paris)! 10m

        For the next EPICS meeting in Spring, the venue will be Saclay.

        Here is a quick summary of where the meeting will take place, how to get there, and the best things to do around Saclay and in Paris during spring.
        Come and bring plenty of topics with you.

        Speaker: Alexis Gaget