2026 CAP Congress / Congrès de l'ACP 2026

U. Ottawa - Learning Crossroads (CRX) Building

100 Louis-Pasteur Private, Ottawa, ON K1N 9N3
Description

Welcome to the CAP2026 Indico site. This site is being used for abstract submission and congress scheduling.

Bienvenue au site web Indico pour ACP2026. Ce site servira à la soumission de résumés et à la préparation de l'horaire.

    • Vacuum short course workshop - offered by Busch/Pfeiffer
    • 14:00
      Congress Registration | Inscription au Congrès
    • CAP Board, Advisory Council and CAP Reps Social Gathering | Réunion sociale du Conseil d'administration, du Conseil consultatif et des représentant(e)s de l'ACP
    • Student Event - Oral Presentation Workshop | Événement pour les étudiant(e)s - Atelier de présentation orale

      Get ready to ace your oral presentations with our comprehensive training session! This workshop is designed to help you enhance your public speaking skills, structure your content effectively, and handle audience questions with confidence. Whether you're a first-time presenter or looking to refine your delivery, this session offers valuable tips and practice opportunities to ensure you present with clarity and impact. Don’t miss out on this chance to level up your presentation skills ahead of the Congress!

    • 18:00
      Congress Registration | Inscription au Congrès
    • Student Icebreaker Event | Événement brise-glace pour les étudiant(e)s

      Kick off the Congress with an exciting Student Icebreaker! This is your chance to meet fellow students from diverse disciplines, exchange cultural insights, and create lasting connections. Participate in engaging activities and enjoy fun conversations in a relaxed atmosphere. It's the perfect opportunity to build friendships that will enrich your Congress experience. Don’t miss out on starting your Congress journey with new friends and shared experiences!

    • CAP Past Presidents' Working Dinner Meeting | Réunion et souper des ancien(ne)s président(e)s de l'ACP

      This is the CAP Past Presidents' Dinner Meeting with the CAP Executive. Pre-registration was required when completing your Congress registration. Past Presidents who did not pre-register can still register for the dinner by emailing Bill Whelan (wwhelan@upei.ca) by Saturday, June 7.

      Convener: William Whelan
    • 07:30
      Congress Registration and Information (07h30-17h30) | Inscription au Congrès et information (07h30-17h30)
    • Congress Welcoming Remarks (08h40-09h00) | Ouverture du Congrès (08h40-09h00)
    • M-PLEN1 Plenary Session | Session plénière - John Donohue, IQC/U.Waterloo
      • 1
        Quantum for Educators and Young Students

        The explosion of public interest in quantum information technologies has provided an opportunity to interest more students in the ideas and applications of quantum mechanics. As research programs and institutes have grown in Canada, so have efforts to reach new communities and teach concepts in new and tactile ways. To allow high-school students to engage with quantum information science on a broad scale, it is essential to provide educators with the resources needed to understand and effectively communicate the topic to students. We have run the Quantum for Educators (QEd) workshop for ten years to address this need, providing hands-on activities and lesson plans designed for the classroom. To ensure broad and equitable access to introductory QIS education, such activities should be low-cost, easy to replicate, and intuitive, as well as connect to material present in the curriculum. In this talk, we will outline the QEd workshop and its approach to topics including quantum communication, quantum algorithms, and uncertainty, sharing survey feedback and lessons learned from many iterations. We will also explore how enrichment programs can be developed for keen high-school students both in person and virtually, as in the Quantum School for Young Students (QSYS) summer school, which has run for 18 years, and how these connect to undergraduate-level programming like the 16-year-running Undergraduate School on Experimental Quantum Information Processing (USEQIP).

        Speaker: John Donohue
    • 09:45
      Health Break | Pause santé
    • (DPE) M1-1 Laboratory experiences | Expériences en laboratoire (DEP)
      • 2
        Teaching Students to be Scientists in a First-Year Physics Lab

        Introductory labs often provide students with highly prescriptive instructions, limiting the independence, autonomy, and creativity that students can exhibit during the experiment. When students enter their first physics lab, they also have varying levels of understanding of the concept of experimental uncertainty, how it arises, and how to interpret it.

        To address these challenges, the University of Toronto Department of Physics recently redesigned its first-year physics lab curriculum to emphasize experimental design, uncertainty, and data analysis. Instead of running a different experiment every week, students investigate a single experiment over multiple weeks, with data collected in the first week and analyzed in the second. This new model encourages students to think like scientists by deciding what data to collect, how to measure it, and how to understand their results.

        In this presentation, I will discuss the structure of the revised lab, share key lessons learned from its implementation, and comment on how it will be further improved in future semesters. I will also summarize the general trends from student feedback and reflect on how this new model shaped their understanding of experimentation.

        Speaker: Matthew Robbins (University of Toronto)
      • 3
        Soldering and Fun: Getting Students Started in Electronics With a Custom Printed Circuit Board

        The capability to construct custom electronics is crucial in many areas of experimental physics, but the associated skills and knowledge are not normally covered in regular classroom or laboratory instruction.

        We present a novel electronics teaching kit featuring a custom printed circuit board (PCB) designed to introduce individuals of diverse technical backgrounds to fundamental electronic components and soldering practices. The kit, a class-D audio amplifier, largely features discrete electronic components and a minimal number of integrated circuits (an LM393 comparator and a TLC555 timer), striking a balance between circuit transparency and real-world functionality. To further promote transparency, the circuit schematic is printed on the front of the PCB, visually connecting most components in an easy-to-follow fashion. The amplifier itself is powered by USB-C, with the analog audio signal provided through a 3.5 mm headphone jack, ensuring compatibility with common electronic devices.

        A central goal of this project is to give students the agency to explore the world of electronics outside of a rigid, step-by-step set of instructions. To that end, a dedicated "custom development area" patterned after a traditional solderless breadboard has also been included on the PCB in addition to the core amplifier circuit. This feature enables participants to experiment with electronic principles and facilitates the integration of secondary audio sources and power supplies into the PCB. Furthermore, students are also encouraged to design and fabricate a custom enclosure for the included speaker through 3D printing, integrating electronics with computer-aided design (CAD) and additive manufacturing.

        This project is fully open-source, and the design files and documentation can be accessed on GitHub. Fifty kits were distributed to University of Waterloo students spanning a wide range of academic and technical backgrounds. We will report on this trial and possible extensions to this project.

        Speaker: Mr Zekun Hao (University of Waterloo)
      • 4
        1st Year Physics labs: Engaging Students During Labs

        As an active member of the 1st year physics lab community in the Department of Physics and Astronomy at York University since the late 1990s, I have observed a variety of changes in the laboratory experience. Over this time, universities have navigated a pandemic and rapid technological change, and keeping pace with these changes continues to be a challenge. With the advent of AI, keeping students engaged during the labs and facilitating active learning is a constantly evolving and ongoing process. One of the biggest challenges is student preparation for an upcoming lab, which generally involves reading the laboratory manual in advance and completing a set of prelab exercises or a prelab quiz. Another challenge is engaging students' critical thinking skills by guiding them to use the experiment at hand to answer questions instead of asking AI platforms, such as ChatGPT, or a search engine. Additionally, the role of the TA or instructor leading the laboratory session can have a strong effect on the learning experience of the students. During this presentation I will share some of the changes and challenges I have experienced over the years and what has been implemented in the 1st year physics labs at the new Markham Campus of York University.

        Speaker: Gloria Orchard (York University)
      • 5
        Emulated Gravitational Lensing in the Undergraduate Laboratory

        General relativity predicts the deflection of light by the gravitational field of massive bodies. This deflection can result in the distortion and magnification of images of distant celestial objects, which is a phenomenon called gravitational lensing. In the work presented here, acrylic disks are machined into lenses with a curvature that deflects light in much the same way as a gravitational lens. This emulated gravitational lensing results in images of Einstein rings and arcs as in strong gravitational lensing, tangential shear of background images as in weak gravitational lensing, and simulates the increased brightness of a background object as in gravitational microlensing. The emulated mass of the lens is determined from the lensed images in each gravitational lensing regime. These mass measurements agree with each other and with the expected emulated mass based on the machined curvature of the lens. Systematic effects are studied and results are compared using various distance scales and lenses.

        Speaker: Daniel FitzGreen (McMaster University)
      • 6
        Lessons learned – a graduate student’s reflections on standing on the other side of the classroom

        A common feature of many physics graduate programs is taking on roles as teaching assistants (TAs). The responsibilities and time commitments of these positions vary considerably, from relatively passive tasks, such as grading, to fully student-facing teaching roles. In their more student-facing roles, TA positions can be influential in student growth and experience. Indeed, in many cases, the opportunities afforded by acting as a TA represent formative exposure to physics education and mentorship. However, while these responsibilities are often a net positive for the educational experience of graduate students, they are not without challenges. The most ubiquitous challenge comes from the at times considerable contribution of TA duties to graduate student workload over the course of their programs. This is particularly true for students who enjoy the more education- and science-communication-driven aspects, and struggle to balance those interests with an already demanding research schedule. Moreover, for many graduate students, their role as a TA is made harder by language barriers. Here, as a graduate student at the end of my PhD program, I hope to share my own experiences in navigating student interaction and instruction in a second language and balancing research and TA commitments.

        Speaker: Daniel Trotter
    • (DAMOPC) M1-10 | (DPAMPC)
      • 7
        Watching a molecular bond break

        Polarization and correlation effects in molecular photodynamics are intimately tied to the coupled motion of electrons and nuclei. Understanding these interactions on their natural femtosecond timescales requires measurements that resolve both electronic structure and nuclear configuration simultaneously. Here, we present a complete imaging study of the UV-initiated dissociation of Br₂ using a femtosecond pump–probe scheme combined with COLTRIMS-based electron–ion coincidence detection [1]. Using 400 nm pump and 800 nm probe femtosecond pulses, we track the time-resolved evolution of molecular fragmentation through both electronic and structural observables. Our experiment provides a direct, molecular-frame view of how electron correlation, orbital coherence, and ionic polarization influence dissociation dynamics on the few-femtosecond scale.

        The initial 400 nm excitation populates the dissociative C-state of Br₂, launching a neutral dissociation process [2]. A delayed 800 nm pulse ionizes the evolving fragments at controlled delay times, allowing us to correlate the kinetic energy release (KER) with photoelectron momentum distributions. The use of COLTRIMS enables full three-dimensional momentum imaging in coincidence, revealing both ionization dynamics and internuclear distance evolution with sub-cycle temporal resolution [3].

        The measured photoelectron distributions show striking delay-dependent features, including angular asymmetries, ATI ring modulation, and high-momentum enhancements that encode coherence and polarization effects tied to the evolving charge environment. These features reflect the progressive change in parent-ion polarizability and the transition from multi-center (molecular) to single-center (atomic) scattering regimes. Semiclassical two-step (SCTS) modeling and time-dependent density functional theory (TDDFT) simulations confirm that tunnel ionization probes a superposition of molecular and atomic polarization channels whose relative contributions evolve during bond breaking.

        We find an approximately 50 fs temporal offset between the completion of electronic orbital reconfiguration and structural dissociation, indicating that electron dynamics precede and drive nuclear rearrangement. This decoupling of electronic and nuclear observables is a hallmark of correlated dynamics and emphasizes the need for complete measurements, specifically ion–electron coincidence in the molecular frame, to determine true reaction timescales.

        Our results demonstrate how ultrafast correlation, ionic polarization, and multielectron coherence shape molecular dissociation and strong-field ionization pathways. This study exemplifies the power of multi-observable approaches for uncovering coherence and correlation in the interaction of intense laser fields with complex molecular systems.

        References
        [1] T. Wang et al., manuscript submitted (2026)
        [2] W. Li et al., 2010, PNAS 107, 20219
        [3] J. Ullrich et al., 2003, Rep. Prog. Phys. 66, 1463

        Speaker: Dr Nida Haram (Joint Attosecond Science Lab, National Research Council and University of Ottawa, ON K1A 0R6, Canada)
      • 8
        Quantum Dial for High-Harmonic Generation

        High-harmonic generation (HHG) is a highly nonlinear optical process that typically requires an intense laser to trigger emissions at integer multiples of the driving field frequency. However, the strong fields required for conventional HHG inevitably perturb the system, limiting its use as a nondestructive spectroscopic probe. Recent advances in bright squeezed vacuum (BSV) sources have created opportunities to drive HHG with quantum fields alone. In this work, we demonstrate a regime in which the light-matter interactions can be controlled and tuned using a weak classical field, whose pulse energy is two orders of magnitude lower than that in standard HHG, perturbed by an even weaker quantum field, such as BSV. This approach opens new avenues for nonlinear spectroscopy of materials while substantially suppressing strong laser-induced damage, distortions, and heating. We show that a BSV pulse containing less than 5% of the classical driving energy can act as an 'optical dial', allowing tuning of the nonlinear emission spectrum, emission angular dependence, and ionization.

        Speaker: Lu Wang (University of Ottawa)
      • 9
        XUV generation with toroidal pulse

        When an electron interacts with an intense laser field, it is driven to relativistic velocities and emits radiation at integer multiples of the laser frequency (harmonics), thereby upconverting optical light into the extreme-ultraviolet or beyond. This process, known as relativistic nonlinear Thomson scattering, is governed by the combined action of the electric and magnetic components of the laser field, both of which play a crucial role in determining the emitted radiation. Prior investigations of nonlinear Thomson scattering have considered a variety of electromagnetic beam configurations, including linearly, circularly, radially, and azimuthally polarized beams, as well as vortex beams.

        Here, we consider relativistic emission driven by a toroidal pulse—an exact solution of Maxwell’s equations with unique properties. In contrast to Gaussian pulses, its spatial and temporal structures are intrinsically coupled and cannot be factorized into independent spatial and temporal profiles. The toroidal pulse exhibits a distinct toroidal topology: the magnetic field forms a doughnut-like structure around the propagation axis, while the electric field wraps along its surface, leading to a pronounced longitudinal component aligned with the direction of propagation. Thus, the toroidal pulse is also known as the flying electromagnetic doughnut. At the focal center, the toroidal pulse is characterized by a tightly localized, single-cycle electric field.

        Our results suggest that at substantially lower pulse energies than those required for conventional Gaussian drivers, a toroidal pulse with peak field strength $E_0=10^{12}\,\text{V/m}$ ($a_0\approx0.2$) already leads to strongly directional radiation up to 150 eV, which corresponds to the 100th harmonic of a 0.8 µm laser. Our research offers a new route to UV and X-ray sources, high-harmonic generation, and advanced light–matter interaction studies.

        Speaker: Lu Wang (University of Ottawa)
      • 10
        Suppression of Amplified Spontaneous Emission in Fe:ZnSe Amplifier

        We present a scheme to suppress Amplified Spontaneous Emission (ASE) in a high-gain Fe:ZnSe chirped pulse amplifier (CPA) by increasing the seed energy from sub-µJ to sub-mJ level. The high-energy seed pulses are generated using a broadband optical parametric chirped pulse amplifier based on ZnGeP2 (ZGP) pumped by a Ho:YLF CPA.

        Shifting the driving wavelength in ultrafast physics, e.g. high harmonic generation (HHG), to the mid-infrared (MIR) brings various benefits and will be a vital step in elevating experiments to the next level. While HHG driven by Ti:Sapphire lasers can reach cutoff energies of up to 130 eV, HHG driven by a 4 µm laser could reach energies above 1 keV. This would enable attosecond transient absorption spectroscopy experiments on vital elements such as the 3d transition metals Fe, Co, and Ni.
        An essential step for this is the generation of short, high-energy laser pulses in the MIR. Fe:ZnSe is a promising candidate to become the workhorse of MIR pulsed lasers. It has many similarities to Ti:Sapphire, including a broad emission bandwidth. In the past, we demonstrated a Fe:ZnSe CPA capable of delivering up to 4.33 mJ. The total energy of this system was limited by ASE.
        By preamplifying the seed using a ZGP OPCPA, we can reduce the ASE in the amplifier. The OPCPA can increase the energy from sub-µJ to >100 µJ with very low optical parametric generation. This will lead to significantly reduced ASE in the Fe:ZnSe CPA, enabling a further increase of its output energy and allowing the addition of a second amplification stage.

        Speaker: Mr Jon Morten Drees (University of Ottawa, Canada)
      • 11
        Polarization-Controlled On-Axis Phase Matching in Extreme Four-wave Mixing

        Four-wave mixing (FWM), in which three photons interacting in a Kerr nonlinear medium produce a fourth photon, has found a wide range of applications such as parametric amplification and quantum optics. Because the gain scales with the pump intensity, ultrafast pump lasers enable FWM at extreme intensities, leading to enormous amplification. However, within scalar or co-polarized models of the Kerr nonlinearity, on-axis parametric gain is absent, and Kerr-driven instabilities rely on phase matching that favours off-axis modes through transverse momentum conservation. Non-collinear propagation reduces the pump-seed interaction length, placing an upper bound on the achievable amplification.
        Here we show that this picture changes when the seed field is orthogonally polarized with respect to the pump. We demonstrate that although orthogonal polarization reduces the effective gain, it allows on-axis phase matched parametric amplification in an otherwise isotropic Kerr medium, enabling significantly longer interaction lengths. In this regime, birefringence offers further control over the phase matched gain bandwidth.
        The existence of on-axis gain has important consequences for the stability of the pump itself. In particular, it implies that the pump becomes unstable to on-axis perturbations in the orthogonal polarization, even in the absence of an external seed. At sufficiently high intensities and interaction lengths, this instability should manifest as on-axis spontaneous FWM or polarization-enabled modulation instability. These results identify polarization as a powerful control parameter for Kerr-instability dynamics and revise the standard understanding of phase matching in extreme four-wave mixing.

        Speaker: TJ Hammond (University of Windsor)
      • 12
        High Harmonic Generation from a Noble Metal

        High harmonic generation (HHG) – the signature of strong field physics – has been demonstrated in transparent solids. The underlying mechanisms in solids are commonly classified as interband emission, arising from laser-driven electron–hole recombination, and intraband emission, originating from nonlinear carrier motion within non-parabolic bands.
        In metals, by contrast, the poor penetration of light means that electrons experience a much weaker field than the incident laser field; hence, HHG from metals has remained largely unexplored. Recently, the observation of HHG in plasmonic titanium nitride (TiN) suggested that intraband-dominated emission occurs at laser intensities close to the damage threshold, opening the question of whether and how harmonics can be generated in conventional metals. Here, we report vacuum ultraviolet HHG from epitaxial thin films and bulk single-crystal silver, a noble metal.
        In the experiment, few-cycle near-infrared (NIR) pulses centered at 780 nm are focused onto 65 nm thick epitaxial Ag films in reflection geometry, and the emitted harmonics are detected in the far field. Bright harmonics up to the 11th order, corresponding to photon energies near 19 eV, are observed. Under identical illumination conditions, films deposited on silicon withstand intensities of 30 TW/cm², surpassing the damage threshold of TiN before lattice modification occurs. Our theoretical analysis indicates that, at near-damage intensity, the dominant contribution to harmonic emission in silver arises from transitions involving d-electrons.
        We further investigate polarization and crystal orientation effects. Using a circularly polarized NIR field, efficient harmonics are generated in accordance with space–time symmetry selection rules, and the harmonic yield exhibits a six-fold rotational symmetry reflecting the Ag (111) crystal structure.
        These results demonstrate that HHG is a universal response of metals driven near their damage threshold and extend ultrafast strong-field spectroscopy into the metallic regime, where lattice order and plasma formation intersect. HHG from metals is promising for the development of integrated VUV photonics: metals pumped with conventional laser sources can now fill all roles, from serving as the VUV source to heat-sinking the nonlinear material to a thermal bath.

        Speaker: Shima Gholam Mirzaeimoghadar (University of Ottawa)
      • 13
        Modelling radiation-balanced solar pumped lasers with a quantum optical framework

        The physics of solar pumped and radiation-balanced lasers lies at the intersection of solar pumped solid-state materials and the anti-Stokes optical cooling of solids. A solar pumped laser utilizes filtered sunlight directly to pump a gain medium and achieve population inversion for lasing output. Experiments on functional solar lasers have been conducted for around two decades, and solar lasers can produce powers in the tens of watts. Typical solar pumped lasers utilize a foil to focus around a 1 m² area of sunlight onto a rare-earth-doped crystal acting as a gain medium. Depending on the time of day and location on Earth, the solar irradiance is on the order of 600–1000 W/m². Furthermore, only a fraction of the solar spectrum can act as a pump for the gain medium. The other important constraint and challenge is to manage the thermal load on the gain medium, knowing that temperature directly affects the lasing threshold and efficiency.

        We explore the modelling and feasibility of designing a radiation-balanced solar pumped laser using a common solid-state laser gain medium, ytterbium-doped yttrium aluminium garnet (Yb:YAG). Ion-doped gain media can be modelled with a fully quantum optical framework because the embedded ions do not interact with each other and their contributions can be summed. We design the solar pumping and lasing transitions to optimize for additional anti-Stokes cooling transitions to occur efficiently. A fully quantum model helps to elucidate how the microscopic energy exchanges within the gain medium, the laser cavity, and the incoherent solar pump scale to the macroscopic laser output. Such a design would allow for self-sufficient laser operation in which solar irradiance both pumps and cools the gain medium to operate at ideal conditions and maximize laser power output; we aim to demonstrate the feasibility of solar pumped lasers for renewable energy applications.

        Speaker: Ahmed Jaber (University of Ottawa)
    • (DCMMP) M1-11 | (DPMCM)
    • (DQI) M1-2 | (DIQ)
      • 14
        On the power of multipartite entanglement for pseudotelepathy

        As early as 1935, Schrödinger recognized entanglement as “not one, but the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought”. Indeed, most remarkable phenomena in quantum information science, such as quantum computing and quantum teleportation, spring from clever uses of entanglement. Among them, pseudotelepathy enables two or more players to win systematically at some cooperative games with no need for communication between them, a restriction that would make the task impossible in a classical world. We investigate the power of multipartite entanglement for pseudotelepathy. Some known games that can be won with tripartite entanglement cannot be won with bipartite entanglement, but they can be won with bipartite non-signalling resources such as the so-called Popescu-Rohrlich nonlocal box. We exhibit a five-player game that can be won with tripartite entanglement, but not with arbitrary bipartite non-signalling resources even in the presence of arbitrary five-partite classical resources. This illustrates both the power of bipartite non-signalling resources (over bipartite entanglement) and the even superior power of tripartite entanglement.

        Speaker: Prof. Xavier Coiteux-Roy (University of Calgary)
      • 15
        From Fast‑Scrambling Black Holes to Cosmological Complexity: Quantum Information Tools for Gravity

        Quantum information provides new ways of thinking about gravity. I’ll first describe our work on black holes viewed as fast “scramblers” of information: if a black hole mixes its internal information very quickly, then someone falling in would encounter a high-energy barrier (a “firewall”) almost immediately, rather than passing smoothly through the horizon as Einstein’s theory suggests; indeed, every astrophysical black hole in the universe would already have a fully developed firewall. Turning to cosmology, we use a simple measure of how “complicated” the tiny patterns of matter and energy become as the Universe expands. During the rapid growth phase of the early Universe, this complexity increases steadily, but in later eras it drops and eventually stops changing. We also find a maximum rate at which this complexity can grow, with the inflationary period reaching that limit. These results show how ideas from information theory can shed light on both black holes and cosmic evolution.

        References:

        A. Bhattacharyya, S. Das, S. Haque, B. Underwood, Phys. Rev. D 101, 106020 (2020); Phys. Rev. Research 2, 033273 (2020).
        Z-W. Wang, S Das, S. L. Braunstein, arXiv:2206.02053.

        Speaker: Prof. Saurya Das (University of Lethbridge)
      • 16
        Additivity and Nonadditivity of Quantum Channel Capacities

        A central goal of quantum information theory is to determine the capacities of a quantum channel for sending different sorts of information. I’ll highlight the new and fundamentally quantum aspects that arise in quantum information theory compared to the classical theory. These include the central role of entanglement, nonadditivity, and synergies between resources. I will also discuss some challenging open questions that we will have to solve to push the theory forward.

        Speaker: Prof. Graeme Smith (University of Waterloo)
      • 17
        Security of quantum position-verification

        In this talk I will review recent progress on understanding the security of quantum position-verification, a proposal to use quantum communication to verify the location in space of a party remotely. In particular I will review the $f$-routing proposal and lower bounds on the entanglement or complexity cost to cheat in this scheme. Based on https://arxiv.org/abs/2402.18648, https://arxiv.org/abs/2402.18647 and upcoming work.

        Speaker: Alex May (Perimeter Institute for Theoretical Physics)
    • (DAMOPC) M1-3 | (DPAMPC)
      • 18
        Enhanced THz gas spectroscopy using hollow-core THz fibre

        Terahertz spectroscopy provides a powerful means of identifying gas-phase molecules through their characteristic spectral signatures. However, several technical hurdles still limit sensitive monitoring and high-frequency resolution, both of which are essential for distinguishing different gas species. While increasing the light-sample interaction length is beneficial, the relatively large beam size and divergence of terahertz radiation require a sizeable gas enclosure to substantially increase the interaction length. This configuration not only restricts the development of compact systems, but it also becomes a fundamental limitation for applications where only a small amount of the analyte gas is available.
        In this work, we introduce a fibre-based terahertz gas spectroscopy platform that addresses these limitations by combining an anti-resonant hollow-core terahertz fibre with terahertz time-domain spectroscopy. We demonstrate a hollow core photonic crystal fibre that guides the terahertz field while simultaneously serving as a gas cell with an optimized geometry that minimizes the required sample volume. The fibre confines the radiation within its core by anti-resonant reflection, setting the effective interaction length between the field and the gas to 10 cm, which is the fibre length. The setup relies on broadband terahertz pulses generated through optical rectification in a GaP crystal and coupled into the fibre using free-space optics, while the transmitted THz field is detected by electro-optic sampling. This fibre-based architecture offers a compact alternative to conventional free-space gas cells while supporting broadband molecular spectroscopy. In brief, this work explores hollow-core terahertz fibres as a viable platform for THz gas spectroscopy, enabling applications ranging from toxic gas detection to non-invasive, breath-based diagnostics through waveguide-enhanced light-matter interaction.

        Speaker: SUBHROJYOTI BHATTACHARYA (University of Ottawa)
      • 19
        Real-time pump-probe characterization of laser-induced glucose melting.

        Organic materials are the building blocks of life. From simple molecules such as glucose to complex proteins and DNA, these organic compounds drive essential processes that make life possible. Many of these processes involve thermally induced phase transitions - rapid structural or chemical changes occurring on short timescales, ranging from milliseconds down to microseconds. The terahertz (THz) spectral window overlaps with the vibrational energies of many of these organic materials and can be used to investigate collective molecular motions, allowing changes in molecular structure and chemistry to be closely monitored.
        Glucose, a key organic compound, serves as a prime example, as it exhibits a sharp THz resonance indicative of its crystalline structure and features an intriguing temperature-induced melting process eventually leading to caramelization. To investigate this transition, we employ real-time single-pulse THz spectroscopy, a technique capable of capturing rapid, irreversible chemical changes with microsecond resolution. In this study, we monitor the real-time melting dynamics of glucose using both visible-light transmission and single-pulse THz monitoring in a pump-probe configuration, resolving individual pulses at the laser’s repetition rate. Simultaneous high-speed imaging provides spatially resolved snapshots of the transition from microscopic crystals to liquid.
        We present insights into temperature-induced phase transition in glucose on a sub-millisecond timescale. Such investigations will further advance our understanding of ultrafast irreversible physical and chemical processes in organic materials and, more broadly, biological systems.

        Speaker: SUBHROJYOTI BHATTACHARYA (University of Ottawa)
      • 20
        Nonlinear guided-wave terahertz generation and detection for high-speed communications

        We demonstrate compact terahertz (THz) components that combine a photonic crystal waveguide with a plasmonic bull’s-eye coupler. The waveguide, formed by a pattern of laser-written holes that extend through the thickness of a GaP window, allows efficient THz generation through optical rectification of a co-propagating near-infrared pulse. A similar waveguiding structure allows sensitive guided-wave detection via electro-optic sampling. These THz components address key limitations of free-space THz systems, including phase mismatch, short interaction lengths, weak mode overlap, and alignment sensitivity.

        The waveguide is engineered to tailor the effective refractive index of the THz mode, achieving velocity matching with a co-propagating near-infrared pulse. This design increases optical-to-THz conversion efficiency by extending the coherence length beyond that of bulk GaP. The plasmonic bull’s-eye antenna further enhances performance by concentrating incident THz fields into a small interaction volume and coupling them efficiently into the guided mode.
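        The velocity-matching argument can be made concrete with the standard coherence-length expression for optical rectification, $l_c = \pi c / (\omega_{THz}\,|n_g^{NIR} - n_{THz}|)$. The indices below are illustrative placeholders, not the measured values for this device:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def coherence_length(f_thz_hz, n_group_nir, n_thz):
    """Coherence length for optical rectification:
    l_c = pi * c / (omega_THz * |n_g(NIR) - n(THz)|)."""
    omega = 2 * math.pi * f_thz_hz
    return math.pi * C / (omega * abs(n_group_nir - n_thz))

# Illustrative indices only (assumed, not measured values for this device)
bulk = coherence_length(3e12, 3.30, 3.40)    # bulk-like index mismatch
guided = coherence_length(3e12, 3.30, 3.31)  # waveguide-tailored THz index
print(f"bulk: {bulk*1e3:.2f} mm, guided: {guided*1e3:.2f} mm")
```

        Reducing the index mismatch tenfold extends the coherence length tenfold, which is the sense in which tailoring the THz effective index lengthens the useful interaction region.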

        We demonstrate this concept with broadband and tunable THz generation and detection from 1.9 to 3.9 THz. Significant improvements are observed in comparison to results obtained with a bulk GaP crystal.

        All structures are fabricated directly in bulk GaP using femtosecond laser writing, enabling precise three-dimensional patterning at the micrometer scale. This scalable and versatile fabrication approach allows rapid prototyping while ensuring consistent, reliable device performance. The resulting platform provides a compact, high-performance solution for both THz generation and detection, with strong relevance for next-generation high-speed wireless communication systems.

        Speaker: HESAM HEYDARIAN (Department of Physics, University of Ottawa)
      • 21
        Terahertz spectroscopy over 25 THz bandwidth enabled by BNA crystals and a single-ring-fiber pulse compressor

        Terahertz time-domain spectroscopy (THz-TDS) is a non-invasive technique capable of probing the complex dielectric properties of a wide range of organic and inorganic materials. When combined with an ultrafast pump pulse, THz-TDS systems enable the investigation of nonequilibrium excitations in semiconductors, superconductors, topological insulators, Dirac and Weyl semimetals, heavy-fermion systems, and quantum spin liquids.
        Although significant progress has been made in extending the bandwidth of THz-TDS systems over the past decades, systems capable of efficiently covering the frequency range between 5 and 15 THz remain scarce. Conventional THz generation and detection schemes based on difference-frequency mixing in second-order nonlinear semiconductor crystals are fundamentally limited in this range by strong phonon absorption. Alternative approaches, such as air-plasma and metallic spintronic THz sources, can access the 5-15 THz region but typically suffer from a relatively low generation efficiency.
        Emerging organic nonlinear crystals offer new opportunities for efficient THz generation and detection from near-infrared pulses due to their large optical nonlinear coefficients and favorable phase-matching conditions. We present an ultrabroadband THz-TDS system that uses organic BNA crystals for both THz generation via optical rectification and detection via conventional electro-optic sampling, combined with a tunable single-ring hollow-core photonic crystal fiber (HC-PCF) compressor to temporally shorten the near-infrared pulses from a commercial Yb:KGW ultrafast amplifier at 1030 nm. The resulting system spans 0.3 to 25.2 THz, with a significant portion of the spectrum falling within the “new THz gap” between 5 and 15 THz.
        This configuration extends the accessible spectral range beyond that previously achieved with Yb-based organic-crystal systems and offers a streamlined single-color, broadband THz-TDS platform. The presented system provides a robust tool for exploring phonon and polariton dynamics in two-dimensional and semiconductor systems, as well as excitonic phenomena in quantum materials.

        Speaker: Wei Cui (University of Ottawa)
      • 22
        Efficient THz‑to‑NIR upconversion in BNA for quantum-level THz detection

        Nonlinear frequency upconversion of terahertz (THz) radiation into the near infrared (NIR) or visible offers a highly sensitive approach to THz detection. This method is particularly appealing because it avoids the need for bulky cryogenic systems to suppress thermal background noise. Considerable effort has gone into developing nonlinear THz detection schemes, exploring a wide range of materials to improve both sensitivity and accessible bandwidth. Recent advances in organic crystals have been especially promising, since their large optical nonlinearity and favorable phase matching properties enable efficient THz to NIR upconversion at optical frequencies extending up to 25 THz.
        Here, we present an approach for nonlinear frequency upconversion detection around 10 THz using the organic crystal N-benzyl-2-methyl-4-nitroaniline (BNA). The upconverted photons resulting from the interaction of a THz pulse with a near-infrared pulse in the crystal are spectrally resolved with a monochromator and detected using a commercial silicon-based single-photon detector. A polarizer, spatial filter and a spectral filter also enhance the detection sensitivity towards the single-photon level. We can accurately estimate the detection sensitivity by calibrating the measured SFG photon counts against the THz photon number extracted from standard electro-optic sampling. We further characterize the optical losses and the detector quantum efficiency to obtain the internal conversion efficiency in the crystal. Initial measurements at 4 THz show that a single THz pulse containing 60 photons can be detected with more than 50% probability. At higher THz frequencies, improved conversion efficiency is expected to further enhance sensitivity. Measurements are underway to extend the characterization up to ~12 THz. This work provides a practical route to a room-temperature, single-photon-level upconversion detection scheme near 10 THz for THz quantum optics.
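        Under a simple Poisson counting model (an assumption for illustration, not the calibration procedure used in this work), a >50% chance of registering at least one count from a 60-photon pulse implies a mean detected count $\mu \geq \ln 2$, i.e. an end-to-end efficiency near 1.2%:

```python
import math

def detect_prob(n_photons, efficiency):
    """Probability of registering at least one count, assuming Poisson statistics."""
    mu = n_photons * efficiency
    return 1.0 - math.exp(-mu)

def efficiency_for_prob(n_photons, prob):
    """End-to-end efficiency needed to reach a given detection probability."""
    return -math.log(1.0 - prob) / n_photons

# A >50% detection probability for a 60-photon pulse implies mu >= ln 2
eta = efficiency_for_prob(60, 0.5)
print(f"required end-to-end efficiency ~ {eta:.3%}")
```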

        Speaker: Aswin Vishnu Radhan (University of Ottawa)
      • 23
        THz high-harmonic generation in graphene-based materials: from engineered THG enhancement to symmetry-forbidden emission

        Terahertz (THz) nonlinear optics is an emerging platform for studying strong-field light-matter interactions at low photon energies[1]. Because THz photon energies are far below typical interband transition energies, nonlinear responses are governed primarily by the THz-driven thermodynamic response of the background Dirac carriers. In this regime, intraband acceleration of residual carriers and carrier heating play central roles[2]. These processes shape carrier transport and nonlinear optical response, motivating both fundamental studies and device concepts for nonlinear THz frequency conversion. High-harmonic generation (HHG) is a direct route to such conversion, producing phase-coherent spectral components at integer multiples of the pump frequency and upconverting THz signals into higher-frequency bands relevant to ultrahigh-speed information and wireless communications.
        Graphene is a particularly attractive platform because its Dirac dispersion supports a strong intraband response and exceptionally large THz nonlinearities under ambient conditions[3]. Here, we use a table-top high-field THz platform with advanced THz spectral filtering to enable field-resolved characterization of nonlinear emission. Within this framework, we demonstrate enhanced third-harmonic generation (THG) by stacking graphene layers, applying electrostatic gating, and employing metasurface-assisted field engineering[3]. We then apply the same methodology to graphene oxide (GO), a chemically functionalized derivative of graphene, and observe THz HHG that includes phase-coherent spectral components at even harmonic positions. We investigate the origin of this signal by systematically ruling out incoherent, phase-random contributions and nonlinear responses from residual symmetry breaking. Overall, our results show that engineered graphene platforms can boost and tune THz nonlinearity, while GO can unlock non-perturbative pathways that broaden the accessible harmonic spectrum, supporting compact and reconfigurable THz components for frequency conversion and waveform engineering.
        References
        1. Hafez, H. A. et al. Adv. Opt. Mater. 8, (2020).
        2. Hafez, H. A. et al. Nature 561, 507–511 (2018).
        3. Maleki, A. et al. Light Sci. Appl. 14, 1–10 (2025).

        Speaker: Ali Maleki (University of Ottawa)
      • 24
        Towards fast and sensitive terahertz detection for high-speed wireless communications

        The continuous surge in internet usage and the growing demand for higher data-transfer rates are driving the exploration of new, unallocated frequency bands, pushing research toward higher frequencies in the terahertz (THz) range. THz frequencies offer exceptionally wide bandwidths capable of supporting data-transfer rates of the order of terabits per second. The realization of high-speed THz communications relies on advancements in three key areas: signal generation, propagation and detection. This work focuses on the fast and sensitive detection of THz signals. Although THz radiation is widely used in many areas including astronomy, imaging, security and non-destructive analysis, high-speed THz detectors remain a challenge. Traditional THz time-domain spectroscopy (THz-TDS) detection techniques and thermal detectors are not suitable for high-speed wireless communications, mainly due to their slow response times. To overcome this limitation, we exploit fast detectors operating in the near-infrared (NIR) region of the electromagnetic spectrum. Specifically, we take advantage of a nonlinear process called sum-frequency generation (SFG) in a nonlinear crystal. The SFG photons generated from the interaction between NIR and THz signals lie in the NIR region and preserve the spectral information of the THz radiation. A fast avalanche photodiode with rise and fall times of the order of a few hundred picoseconds is used to achieve sensitive detection and rapid acquisition of THz signals. Using optimized amplitude levels for pulse-amplitude modulation (PAM), we aim to achieve data-transfer rates of hundreds of gigabits per second, with the potential to reach terabits per second.
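        The role of the modulation format can be sketched with the standard PAM scaling, $R = R_{symbol}\log_2 M$, where $M$ is the number of amplitude levels; the numbers below are illustrative, not measured figures from this work:

```python
import math

def pam_bit_rate(symbol_rate_hz, levels):
    """Gross bit rate of pulse-amplitude modulation: R = R_symbol * log2(levels)."""
    return symbol_rate_hz * math.log2(levels)

# Illustrative example: a 100 GBd symbol stream with 4 amplitude levels (PAM-4)
rate = pam_bit_rate(100e9, 4)
print(f"{rate/1e9:.0f} Gb/s")
```

        Increasing either the symbol rate (enabled by a fast photodiode) or the number of amplitude levels pushes the gross rate toward the terabit-per-second regime.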

        Speaker: Eeswar Yalavarthi (University of Ottawa)
    • (DCMMP) M1-4 | (DPMCM)
    • (DNP) M1-5 Hadrons-I | Hadrons-I (DPN)
      • 25
        From Gluons to Quantum: Toward a Network-of-Networks Vision for the Future of QCD

        Quantum Chromodynamics (QCD) governs the structure of visible matter, yet its most pressing challenges, from dense gluonic dynamics to precision descriptions of hadrons and nuclei, increasingly demand coordinated efforts beyond traditional collaborations. The Inter-American Network of Networks of QCD Challenges (I.ANN QCD) connects communities across the Americas to address fundamental questions in strong-interaction physics through a network-of-networks approach.

        In this talk, I will highlight scientific themes within I.ANN QCD, including high-density gluon dynamics and ultraperipheral collisions as clean photon-induced probes connected to the future Electron–Ion Collider program. I will also discuss how advances in AI, quantum science, computing, and next-generation detector and accelerator technologies, together with industry partnerships, are creating shared challenges and opportunities across physics, with QCD both driving and benefiting from this broader transformation at the heart of matter.

        Speaker: Daniel Tapia Takaki (University of Kansas)
      • 26
        Probing Generalized Parton Distributions (GPDs) with the Solenoidal Large Intensity Device (SoLID)

        Generalized Parton Distributions (GPDs) have emerged as a powerful framework for exploring the internal structure of hadrons in terms of their partonic constituents. The proposed Solenoidal Large Intensity Device (SoLID) at JLab is well suited to the study of GPDs, with its unique combination of large kinematic coverage and high luminosity ($>10^{37}$/s/cm$^2$) capabilities. The SoLID GPD program is ambitious, including Deeply Virtual Compton Scattering (DVCS) and Deep Exclusive Meson Production (DEMP) with polarized targets, as well as Double DVCS (DDVCS) and Timelike Compton Scattering (TCS) at high luminosity. Together, these will provide a variety of GPD data that are unlikely to be obtained from any other facility. An overview of the proposed GPD program with SoLID will be presented.

        Speaker: Garth Huber (University of Regina)
      • 27
        Exotic two-pseudoscalar meson decays in GlueX

        Hadronic physics aims to understand the contribution of quarks, gluons, and their internal dynamics in the formation of hadrons. Quantum Chromodynamics predicts a number of bound states, including those having an explicit gluonic degree of freedom called hybrids, but only a few have been confirmed experimentally. The GlueX experiment at Jefferson Lab, USA, utilizes a linearly polarized photon beam of 8-9 GeV and a large solid-angle particle detector for hadron spectroscopy. Among the physics goals are the study of the hybrid meson spectrum and the search for mesons with exotic quantum numbers. Recently, the signature of a predicted exotic isoscalar, the η1, has been reported by the BESIII collaboration in the radiative decay J/ψ → η1γ → ηη′γ. The production of a two-pseudoscalar system, ηη′, is also allowed in the photon-induced interaction γp → ηη′p and can be reconstructed with the GlueX spectrometer, giving us an excellent opportunity to validate the η1's existence. An analysis of this two-meson system based on a statistics-limited GlueX data set did not provide a conclusive result. Recently, the data set has been increased by a factor of 2-3, which will facilitate this search. Preliminary studies for this channel will be presented.

        Speaker: Zisis Papandreou (University of Regina)
      • 28
        Rare eta decay $\eta \rightarrow \pi^+ \pi^- e^+ e^-$ with the GlueX experiment at Jefferson Lab

        For over a century, physicists have developed a detailed framework describing the fundamental particles and their interactions. This framework, called the Standard Model (SM), successfully explains phenomena governed by the electromagnetic, strong, and weak nuclear forces. However, it cannot explain several major mysteries, including dark matter, dark energy, and why the universe does not contain equal parts of matter and antimatter. These gaps motivate the search for new physics beyond the Standard Model by searching for new sources of Charge-Parity (CP) violation.
        My research investigates one such path by studying the eta ($\eta$) meson, a short-lived particle created in high-energy collisions, through its rare decay $\eta \rightarrow \pi^+ \pi^- e^+ e^-$. This decay proceeds via a virtual photon, and measuring its polarization allows us to construct an asymmetry factor between the pion and electron decay planes. This becomes an extremely precise test for SM CP-violation that can provide strong evidence for Beyond Standard Model (BSM) theories.
        To carry out this work, I will use the newly upgraded GlueX detector at Jefferson Lab in Virginia. This detector can capture and study particles with precision, and it provides an ideal environment to search for effects that may reveal new sources of CP-violation, which is an essential ingredient needed to explain the observed imbalance between matter and antimatter.
        Because this decay has a branching fraction at the $10^{-4}$ level, I will generate detailed simulations and compare them with real data to separate signal from background. Machine learning tools will play a key role in identifying subtle patterns in the detector and improving particle identification. Analysis techniques, simulation progress, and data-analysis results will be shown.
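        The decay-plane observable mentioned above is commonly quantified as a counting asymmetry in $\sin\phi\cos\phi$, where $\phi$ is the angle between the $\pi^+\pi^-$ and $e^+e^-$ decay planes. A toy sketch of that definition (illustrative only, not the GlueX analysis code):

```python
import math

def plane_asymmetry(angles_rad):
    """Counting asymmetry in the angle phi between the pi+pi- and e+e- decay
    planes: A = (N_+ - N_-) / (N_+ + N_-), with events sorted by the sign
    of sin(phi)*cos(phi)."""
    n_pos = sum(1 for p in angles_rad if math.sin(p) * math.cos(p) > 0)
    n_neg = len(angles_rad) - n_pos
    return (n_pos - n_neg) / (n_pos + n_neg)

# A CP-conserving sample is symmetric in phi, so A is consistent with zero
symmetric = [k * 2 * math.pi / 1000 for k in range(1000)]
print(plane_asymmetry(symmetric))
```

        A statistically significant nonzero asymmetry in real data would signal CP violation in this channel.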

        Speaker: Akshay Ramasubramanian (University of Regina)
    • (DTP) M1-6 | (DPT)
      • 29
        Measuring a bulk's geometry from the outside

        In a spacetime with asymptotically anti-de Sitter boundaries, localized events in the bulk produce characteristic signals at boundary locations that are lightlike from the event. I will describe (thought) experiments that use these signals to measure the geometry and scattering amplitudes of processes happening inside the bulk, and discuss exterior signatures of bulk causality and local dynamics. Based on 2501.13182, 2502.14963.

        Speaker: Simon Caron-Huot (McGill University)
      • 30
        On sufficient conditions for holographic scattering

        Holography implies scattering in the bulk can be mediated by entanglement on the boundary. The connected wedge theorem (CWT) of May, Penington, and Sorce is a concrete example where bulk scattering implies correlation between certain boundary regions. However, the converse does not hold. We investigate a recent proposal of Leutheusser and Liu for a generalization of the CWT with converse. We prove the forward direction: having pairs of CFT "input" (and likewise "output") regions in a phase with connected entanglement wedge implies that a particular bulk subregion (the intersection of "input" and "output" entanglement wedges) is non-empty. We then establish a modified version of the proposal which has a converse, and identify counter-examples to the stronger conjecture.

        Speaker: Chris Waddell (Perimeter Institute)
      • 31
        Towards a three-dimensional holographic picture of the pion

        In light-front holography, the transverse dynamics in the pion is described by the equation for scalar string modes in $AdS_5$ with a quadratic dilaton (soft-wall), resulting in a massless pion. To generate the physical pion mass and describe the low-energy pion data, longitudinal dynamics must be considered. In a recent paper [Phys. Rev. D 111, 034024 (2025)] we identified a viable scenario in which low-energy pion data favour the weak-coupling limit of the quantum spectral curve of a four-segmented string in $AdS_3$. We now show that this viable scenario maps uniquely onto the same soft-wall $AdS_5$ spacetime that underlies transverse dynamics in light-front holography. The essential difference is that the longitudinal mapping is restricted to the near-boundary region of $AdS_5$, whereas the transverse mapping extends into the bulk geometry. This is a promising step in formulating a three-dimensional holographic picture of the pion.

        Speaker: Prof. Ruben Sandapen (Acadia University)
      • 32
        Tropical Geometry and Quantum Field Theory

        Over the past few years, new and exciting connections between tropical geometry and quantum field theory have been unearthed. In this talk I will review these developments with emphasis on recent ones (e.g. Feynman integrals, scattering amplitudes, and correlation functions).

        Speaker: Prof. Freddy Cachazo
      • 33
        Understanding RG flows in low dimensional QFT

        The behaviour of quantum systems depends dramatically on the energy scale at which they are probed. In quantum field theories this phenomenon is described as a renormalization group (RG) flow between the UV (short-distance) and the IR (long-distance) behaviour.
        Given a quantum field theory in 4 spacetime dimensions, identifying the low-energy dynamics of a given microscopic theory is often a very hard task.
        With the aim of gaining insight into this problem, in this talk I will present remarkable results that can be obtained by reducing the number of spacetime dimensions to 2. For RG flows triggered by relevant deformations of 2d CFTs, constraints from generalized (non-invertible) symmetries, as well as from non-local conserved charges, are often enough to completely constrain the IR phase. Non-invertible symmetries are a generalization of standard symmetries: they are generated by the set of topological line defects of the theory, and they form not a group but a fusion category. A further generalization of topological symmetry is obtained by considering defects that commute with the Hamiltonian of the QFT but not with its stress-energy tensor; such defects fail to be fully topological and preserve only translation invariance.
        Combining these two powerful tools, we will illustrate how to constrain the RG flows.

        Speaker: Federico Ambrosino (Perimeter Institute)
    • (PPD) M1-7 | (PPD)
      • 34
        Status and perspectives of DEAP-3600 experiment

        DEAP‑3600, with its 3.3‑tonne liquid argon target, is a dark matter direct‑detection experiment located at SNOLAB in Sudbury, Canada. Detector upgrades have been ongoing since the end of the second fill run in 2020 to reduce backgrounds from shadowed alphas and dust dissolved in the liquid in the just‑started third fill run. We present here the most recent results from the WIMP search, together with the first results from the analysis of the third‑fill data, which will inform both the impact of the detector upgrades on our backgrounds and the design of future noble‑liquid experiments.

        Speaker: Michela Lai (Queen's University)
      • 35
        Solar Neutrino Absorption on Argon-40 in DEAP-3600

        DEAP-3600 (Dark matter Experiment using Argon Pulseshape discrimination) is currently the largest single-phase argon dark matter detector in the world, housing over three tonnes of liquid argon. This makes it an excellent medium for searching for solar neutrino absorption on argon-40, a process first theoretically predicted in the 1980s. This process can be used for sensitive measurement of neutrino spectra and rates. In this talk, we detail the efforts undertaken by the DEAP collaboration to make the first observation of this interaction in data collected between 2016 and 2020, and present the latest results from this analysis.

        Speaker: Michael Perry
      • 36
        What Neutrinos in Dark Matter Experiments, like DarkSide-20k, Can Tell Us

        The Global Argon Dark Matter Collaboration (GADMC), formed to unite liquid argon–based dark matter experiments, is currently constructing its next-generation experiment DarkSide-20k. Located at the Laboratori Nazionali del Gran Sasso (LNGS), DarkSide-20k builds upon the success of previous argon experiments, including its predecessor, DarkSide-50, to continue the search for weakly interacting massive particles (WIMPs). The experiment introduces several new technologies and design improvements to reduce backgrounds and enhance sensitivity.
        With its large target mass and advanced instrumentation, DarkSide-20k is expected to probe WIMP–nucleon cross sections below $1\times10^{-42}$ cm$^2$ for WIMP masses above 800 MeV. Central to this sensitivity is the use of a dual-phase time projection chamber (TPC), which enables precise reconstruction of particle interactions through the detection of both primary scintillation light and an amplified secondary electroluminescence signal. While this amplification improves sensitivity to low-energy nuclear recoils, it also renders DarkSide-20k sensitive to coherent elastic neutrino-nucleus scattering (CEνNS). This sensitivity opens a complementary physics program beyond dark matter searches, including the potential detection of neutrinos from core-collapse supernovae within our galaxy. As such, DarkSide-20k serves not only as a powerful probe of dark matter but also as a novel observatory for low-energy neutrino physics.

        Speaker: Ryan Curtis
      • 37
        Nuclear Physics in Liquid Argon with DEAP-3600

        Given the technical sophistication of ongoing dark matter direct-detection experiments and their increasingly refined background-rejection techniques, some of these experiments are also able to investigate neutrinos. One such detector is the liquid-argon-based DEAP-3600 experiment. The detector assembly allows for nearly 3600 kg of liquid argon scintillator to be used for direct detection experiments in the low-background environment 2 km underground at SNOLAB in Sudbury, Canada.

        This talk will give a brief history of the DEAP experiment, discuss basic technical details of the current experimental design, and describe the data-analysis techniques being used for ongoing search efforts utilizing the large detector mass (3.3 tonnes) of DEAP. This will inform a discussion of the most recent results regarding precision measurements of the radioactive properties of $^{39}$Ar – a measured specific activity of (0.964 ± $0.001_{stat}$ ± $0.024_{sys}$) $\frac{Bq}{kg·atmAr}$ and a half-life of (302 ± $8_{stat}$ ± $6_{sys}$) years. Additionally, the scintillation quenching factor of α-particles in liquid argon will be covered, namely the extrapolation of the quenching factor down to the low-energy region of 10 keV. This will be followed by a discussion of the ongoing efforts towards the neutrinoless double-electron-capture search via $^{36}$Ar using the DEAP experiment and how these efforts allow for the investigation of the nature of neutrinos.
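        As a back-of-the-envelope consistency exercise with the quoted central values (uncertainties omitted), $A = \lambda N$ with $\lambda = \ln 2 / T_{1/2}$ gives the implied number of $^{39}$Ar atoms per kilogram of atmospheric argon:

```python
import math

YEAR_S = 365.25 * 24 * 3600  # Julian year in seconds

def atoms_per_kg(activity_bq_per_kg, half_life_years):
    """Number of radioactive atoms per kg implied by A = lambda * N."""
    decay_const = math.log(2) / (half_life_years * YEAR_S)  # lambda, 1/s
    return activity_bq_per_kg / decay_const

# Central values quoted in the abstract
n39 = atoms_per_kg(0.964, 302)
print(f"~{n39:.2e} 39Ar atoms per kg of atmospheric argon")
```

        With the quoted central values this gives roughly $1.3\times10^{10}$ atoms per kg.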

        Speaker: Peter Taylor (Queen's University)
      • 38
        Towards a Low-Mass WIMP Search with the Scintillating Bubble Chamber

        The Scintillating Bubble Chamber (SBC) collaboration is combining the well-established technologies of bubble chambers with a liquid-noble scintillating target to develop a detector sensitive to sub-keV nuclear recoils with the goal of a GeV-scale WIMP dark matter search. Bubble chambers provide excellent electron recoil suppression, down to sub-keV thresholds as proven by a prototype xenon bubble chamber (XeBC), while scintillation signals from the liquid noble target facilitate event-by-event energy reconstruction. Two detectors are currently under development: SBC-LAr10 and SBC-SNOLAB. SBC-LAr10, operating in the MINOS tunnel at Fermilab, serves as a platform for detector commissioning, calibration, and engineering studies, with the objective of demonstrating stable operation at thresholds near 100 eV. SBC-SNOLAB is a radiopure replica of SBC-LAr10 designed for deployment at SNOLAB near Sudbury, Ontario, where it will operate in a deep underground, low-background environment. Both detectors employ a xenon-doped liquid argon target instrumented with scintillation, acoustic, and optical imaging readout systems. This talk will present the calibration program performed with SBC-LAr10 and its implications for SBC-SNOLAB, as well as recent progress toward a quasi-background-free WIMP search with SBC-SNOLAB.

        Speaker: Gary Sweeney (Queen's University)
      • 39
        Event selection and machine learning reconstruction for the ARGO dark-matter detector

        ARGO is a planned large liquid-argon detector for the direct detection of dark matter, using underground argon to reduce intrinsic radioactive backgrounds. A key analysis challenge is to efficiently select nuclear-recoil events from WIMP-like interactions. The analysis focuses on rejecting electronic recoils and surface-alpha backgrounds, while studying neutron-induced nuclear recoils.

        This work studies an event-selection strategy for ARGO using simulations. The selection is based on the total detected scintillation light as an energy estimator, pulse-shape discrimination between nuclear and electronic recoils, and three-dimensional position reconstruction to define a fiducial volume inside the detector.
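        Pulse-shape discrimination in liquid argon exploits the fast singlet versus slow triplet scintillation components and is commonly quantified by a prompt-fraction parameter. A minimal sketch with an assumed prompt window and toy pulses (illustrative, not the ARGO definition):

```python
def fprompt(times_ns, charges, window_ns=90.0):
    """Prompt-fraction pulse-shape parameter: charge arriving within an
    early window divided by the total charge of the pulse."""
    prompt = sum(q for t, q in zip(times_ns, charges) if t <= window_ns)
    return prompt / sum(charges)

# Toy pulses (illustrative numbers): nuclear recoils emit mostly fast
# singlet light, electronic recoils mostly slow triplet light.
nuclear_like = fprompt([10, 20, 50, 800], [0.4, 0.2, 0.1, 0.3])
electron_like = fprompt([10, 400, 900, 1400], [0.2, 0.3, 0.3, 0.2])
print(nuclear_like, electron_like)
```

        Cutting on this parameter separates the nuclear-recoil signal band from the far more numerous electronic-recoil backgrounds.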

        Event position reconstruction is challenging due to the large detector size and the sparse distribution of photon hits. To address this, a machine learning model is trained on simulated ARGO events using the photon charge and timing information. This model reconstructs event positions accurately despite the sparse input, improving the rejection of surface related and external backgrounds.

        This work is still in progress, with further developments in the machine-learning reconstruction and event-selection strategy expected to improve performance.
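        As a toy illustration of the reconstruction idea (not the actual ARGO model, geometry, or photon simulation): a spherical detector with 64 sensors, an inverse-square charge model, and the simplest possible learned map, a linear least-squares fit from charge fractions to vertex position:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy detector: 64 photosensors placed randomly on a sphere of radius 1
n_pmt = 64
phi = rng.uniform(0, 2 * np.pi, n_pmt)
cost = rng.uniform(-1, 1, n_pmt)
sint = np.sqrt(1 - cost**2)
pmts = np.stack([sint * np.cos(phi), sint * np.sin(phi), cost], axis=1)

def simulate(events):
    """Toy charge pattern: 1/r^2 falloff from the event vertex, normalized."""
    d2 = ((events[:, None, :] - pmts[None, :, :])**2).sum(-1)
    q = 1.0 / np.maximum(d2, 1e-3)
    return q / q.sum(1, keepdims=True)

train_pos = rng.uniform(-0.5, 0.5, (2000, 3))
test_pos = rng.uniform(-0.5, 0.5, (200, 3))
X_train, X_test = simulate(train_pos), simulate(test_pos)

# "Learning" here is the simplest possible model: a linear map fit by
# least squares from charge fractions to the vertex position.
W, *_ = np.linalg.lstsq(X_train, train_pos, rcond=None)
pred = X_test @ W
rms = np.sqrt(((pred - test_pos)**2).sum(1).mean())
print(f"RMS position error: {rms:.3f} (detector radius = 1)")
```

        A real detector would replace the linear map with a deeper model and add timing features, but the structure — simulated events, a forward light model, and a learned inverse — is the same.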

        Speaker: Tiana Fandresena Gérald Ramonjison (Simon Fraser University)
    • (DPMB) M1-8 | (DPMB)
      • 40
        Title to follow

        TBA

        Speaker: Avery Berman (Carleton Univ. Dept. of Physics, and Royal Ottawa Institute of Mental Health Research)
      • 41
        Hyperspectral Time-Resolved Near-Infrared Spectrometer for In-Vivo Adult Neuromonitoring

        Hyperspectral time-resolved (hTR) near-infrared spectroscopy (NIRS) is an advanced analytical technique that provides high-resolution spectral measurements across hundreds of wavelengths, offering improved depth sensitivity compared to conventional continuous-wave (CW) systems. In clinical settings, this technique could enable real-time identification of adverse changes in cerebral blood oxygenation and metabolism; however, designing a system for in-vivo monitoring is challenging due to trade-offs between spectral range, spectral resolution, and acquisition time. We have pioneered an approach that harnesses compressive sensing and highly parallelized time-correlated single-photon counting (TCSPC) to reduce the acquisition time for a 256-point TR spectrum to <6 seconds. Here, we assess the depth sensitivity of our device in phantoms and present in-vivo results tracking the cerebral hemodynamic response to a hypercapnia challenge in healthy adult humans.

        The hTR-NIRS device pairs an illumination architecture that implements the compressive sensing acquisition scheme and a SPAD array with 64 TCSPC-enabled pixels for detection. Two-layer tissue-mimicking phantom experiments were performed, where the top layer (10-mm-thick) remained static while the bottom layer’s blood oxygenation and metabolism were altered. hTR-NIRS probes were placed on the surface of the top layer, while CW-NIRS probes were placed on both the top and bottom layers for validation. A hypercapnia challenge (10 mmHg above normocapnia for 6 minutes) was performed with 5 healthy adults using a computer-controlled gas delivery circuit, with hTR and CW-NIRS probes fixed to the left side of each subject’s forehead. The phantom results demonstrate that hTR-NIRS reduces surface contamination by more than 50% compared to CW-NIRS and can successfully track changes in blood oxygenation and metabolism beneath a 10-mm-thick static top layer. The in-vivo results validate the capability of our system to track the dynamic cerebral hemodynamic response typical of hypercapnia. Future work will increase the sample size and monitor cerebral metabolism changes in-vivo during functional activation.

        Speaker: Natalie Li (Western University)
      • 42
        Development and Application of an MRI-Compatible Robotic Injection System for Precise 1.5T MRI-Guided Glioma Modeling and Cell Therapy

        Precise intracranial cell delivery is essential for establishing reproducible glioma models, yet conventional manual stereotaxis lacks operational stability and real-time imaging guidance within the Magnetic Resonance Imaging (MRI) environment. This project aims to implement a precise modeling workflow using a dedicated MRI-compatible robotic injection system under 1.5T MRI guidance. The hardware platform integrates a non-metallic robotic manipulator with a specialized injection device, optimized for high-resolution imaging within the confined bore (30 cm). To ensure a superior signal-to-noise ratio and animal stability during the procedure, a dedicated small-animal wireless radio-frequency (RF) coil and a gas anesthesia system were incorporated. Under real-time 1.5T MRI monitoring, the system facilitated precise intracranial puncture and cell delivery. In preliminary work, three injection trials have been completed, with the final subject successfully demonstrating tumor induction as confirmed by follow-up imaging. Next steps include leveraging this robotic platform to explore and evaluate diverse cell-based therapeutic strategies, such as stem cell transplantation and targeted drug delivery, to further assess treatment efficacy in preclinical models.

        Speaker: Yuxuan Wang (The University of Winnipeg)
      • 43
        Level-Set Driven 3D Reconstruction of Serial Marmoset Brain Sections

        Whole Slide Imaging (WSI) has become a crucial modality in modern histopathology, enabling the digital conversion of tissue on glass slides into high-resolution virtual slides. WSI supports the reconstruction of three-dimensional (3D) volumes from serial histological sections, providing new opportunities to study the microstructural organization of complex tissues. However, 3D reconstruction from serial WSI is challenged by structural artifacts introduced during tissue preparation and imaging. These inconsistencies can accumulate across sections, degrading volumetric integrity. The level-set method (LSM) is a numerical technique that embeds surface information into a signed distance function to capture complex geometric and topological changes. The proposed workflow mitigates residual distortions by integrating LSM into the non-linear stage of an established pipeline.

        This study utilized in-vivo MRI and histological data from a single adult female marmoset brain. A T1-weighted anatomical scan was acquired at 300 μm isotropic resolution. Following imaging, the brain was cryosectioned coronally at 20 μm thickness, and sections were co-stained with Hoechst 33342 and Myelin Basic Protein. All image registrations were performed using Advanced Normalization Tools (ANTs). The MRI volume was rigidly and affinely registered to the blockface volume to align with the histological cutting plane. Blockface images were aligned using a four-step coarse-to-fine framework consisting of pairwise, sequential, and high-pass filtered transformations to preserve global shape while improving local slice-to-slice alignment. Histological sections were reconstructed using the blockface volume as reference. A final deformable non-linear registration step corrected localized distortions using level-set maps generated from segmented white and gray matter.

        The coarse-to-fine reconstruction achieved an NMI of 0.4470 and SSIM of 0.8022. The reference reconstruction pipeline achieved an NMI of 0.4617 and SSIM of 0.8030. The proposed method achieved an NMI of 0.4946 and SSIM of 0.8297, demonstrating improved inter-modality alignment and structural consistency. This framework addresses spatial information loss introduced during histological processing by providing a flexible, structurally guided approach for 3D reconstruction.
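The signed-distance embedding at the heart of the level-set method can be sketched in a few lines (a toy example using an analytic circle on an assumed 64x64 grid, not the segmentation-derived maps used in the pipeline above):

```python
import numpy as np

# Level-set representation: a contour is stored implicitly as the zero level
# of a signed distance function phi (negative inside, positive outside).
# Grid size, centre, and radius are arbitrary illustrative values.
def circle_sdf(shape, center, radius):
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return np.hypot(yy - center[0], xx - center[1]) - radius

phi = circle_sdf((64, 64), (32.0, 32.0), 15.0)
segmentation = phi < 0  # the region is recovered by thresholding at zero
```

A real pipeline would instead derive phi from the segmented white- and gray-matter masks, but the geometric and topological information is carried the same way: the contour lies wherever phi changes sign.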

        Speaker: Emily DeLeon (McMaster University)
      • 44
        Developing Intelligent Brain-Machine Interfaces: A Bio-camouflage for Neural Probes

        New developments in neuroscience have driven rapid advancements in implantable devices at the brain-machine interface for both research and clinical applications. One such device is the Neuropixels probe, currently used in research and pre-clinical testing, which collects high-density, time-resolved neural impedance recordings across multiple brain regions. However, these implants are limited by biocompatibility mismatches at the machine-tissue interface: long-term implantation elicits the host's immune response to reject the foreign object, leading to glial scarring and signal degradation. We investigate a highly tunable biomimetic interface built through layer-by-layer (LbL) assembly of charged polymers. In collaboration with the Montreal Neurological Institute, we are developing a naturally derived polycation, DPGA, which promotes favorable neuron–surface interactions by bio-camouflaging the implant surface. Because the molecular layers are optically transparent, a key challenge is the direct characterization of the coating on the probe. To address this, we are developing a surrogate technique, based on biocompatible dyes, to characterize the presence of the bio-camouflage while retaining the electrical integrity of the Neuropixels probe. Optical spectroscopy and ellipsometry are used to assess film thickness, packing density, and uniformity. Dye-uptake measurements show an increase in uptake capacity with film thickness: 8-bilayer films exhibit approximately sevenfold higher maximum absorbance than single-bilayer films. The uptake kinetics slow correspondingly: increasing the number of bilayers from 1 to 8 reduces the effective uptake rate constant k by two orders of magnitude, from ~10^-2 s^-1 to ~10^-4 s^-1. In vitro cell studies are also discussed to evaluate biological responses to the bio-camouflage. Results suggest an optimal threshold of approximately four bilayers under physiological conditions.
Beyond this threshold, increased coating thickness is associated with reduced cell adhesion. Finally, impedance and SNR measurements are being investigated to test bio-camouflaging effects on the electrical performance of the probe.
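The reported uptake kinetics can be pictured with a first-order model (a sketch only: the functional form A(t) = A_max(1 - exp(-kt)) is a common choice for dye uptake, and the parameter values below are assumed, chosen near the reported ~10^-2 s^-1 scale):

```python
import numpy as np

# First-order uptake model (assumed form): A(t) = A_max * (1 - exp(-k t)).
# k_true and a_max are illustrative values, not fitted experimental numbers.
k_true, a_max = 1.0e-2, 7.0
t = np.linspace(0.0, 600.0, 121)          # seconds
a = a_max * (1.0 - np.exp(-k_true * t))   # synthetic uptake curve

# The rate constant is recovered from a log-linear fit of the remaining
# capacity, since log(a_max - A(t)) = log(a_max) - k * t.
slope, intercept = np.polyfit(t, np.log(a_max - a), 1)
k_fit = -slope
```

The same log-linear reduction is one simple way an effective rate constant k can be extracted from absorbance-versus-time data.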

        Speaker: Mahta Morad
      • 45
        Estimation of the physical characteristics of an external hardware attenuator in pure PET systems in sinogram space

        Purpose: Nuclear medicine is a functional imaging technique: a radiotracer
        injected into an organism is tracked through the emission and annihilation
        of positrons, mapping the tracer's distribution. It provides no information
        on anatomical features. When external hardware is used, such as a coil in a
        PET/MRI system, its attenuation will impact the resulting image. A classical
        solution in the context of PET/MRI is to bring the external hardware to a CT
        system to obtain an attenuation map; the main limitations of this approach
        are the requirement for a second imaging modality and the inherent
        extrapolation from the CT energies (70-140 keV) to the PET energy (511 keV).
        The approach presented here relies entirely on a PET acquisition,
        circumventing both the second modality and the extrapolation.
        Method: Using a germanium MR-compatible phantom, two PET acquisitions are
        obtained, one with a cylindrical aqueous external attenuator and one without
        it. By comparing their sinograms, it is possible to extract the track, or
        eclipse, caused by the external hardware, corresponding to the lines of
        response it affects. This approach is methodologically simple and easily
        reproducible.
        Results: From the ratio of the sinograms alone, it was possible to
        accurately determine both the distance of the external hardware from the
        scanner axis and its radius. The results were compared with physical
        measurements and with MRI acquisitions performed at the same time as the
        PET acquisition. All measured and determined values are in statistical
        agreement.
        Future Work: Future work includes the analysis of more complex structures,
        especially those with multiple attenuators. Other avenues of interest
        include recovering the exact position in Cartesian space, as well as
        handling non-convex shapes.
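The geometry behind the sinogram-ratio idea can be sketched numerically: an attenuator offset from the scanner axis traces a sinusoidal band in sinogram space whose amplitude gives its distance and whose width gives its radius (a toy simulation with invented dimensions, not the authors' data or reconstruction code):

```python
import numpy as np

# Toy sinogram ratio (with-attenuator / without): a cylinder at distance d
# from the axis, azimuth phi0, radius r, attenuates the lines of response
# satisfying |s - d*sin(theta - phi0)| < r. All numbers are illustrative.
d_true, r_true, phi0 = 40.0, 8.0, 0.7
theta = np.linspace(0.0, np.pi, 180, endpoint=False)
s = np.linspace(-100.0, 100.0, 401)                  # radial bins, 0.5 spacing
S, TH = np.meshgrid(s, theta)

ratio = np.ones_like(S)
ratio[np.abs(S - d_true * np.sin(TH - phi0)) < r_true] = 0.6

# Centre of the attenuated band at each angle; fitting
# d*sin(theta - phi0) = a*sin(theta) + b*cos(theta) recovers the distance,
# and the mean band width gives the radius.
centers = np.array([s[row < 1.0].mean() for row in ratio])
A = np.column_stack([np.sin(theta), np.cos(theta)])
(a_c, b_c), *_ = np.linalg.lstsq(A, centers, rcond=None)
d_fit = float(np.hypot(a_c, b_c))
r_fit = 0.5 * float(np.mean((ratio < 1.0).sum(axis=1)) * (s[1] - s[0]))
```

The fit recovers d and r to within the radial bin width, illustrating why the ratio of the two acquisitions alone suffices to locate the hardware.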

        Speaker: Philippe Laporte
    • (DAPI) M1-9 | (DPAI)
    • 12:00
      Break for Lunch (12h00-13h30) | Pause pour dîner (12h00-13h30)
    • Special Session - Nuclear Threat Reduction
    • M-MEDAL1 Medalist Talk | Conférence du lauréat de la médaille
    • 14:00
      Travel Time | Déplacement
    • (DAPI) M2-1 | (DPAI)
      • 46
        Introduction to and Canadian participation in IEC/ISO Joint Technical Committee 3 – Quantum Technologies

        Both measurement standards and documentary standards are key enablers of the commercialization of emerging technologies. As quantum technology continues its accelerating delivery of real-world solutions, it is a key time for the development of standards to validate these technologies and to set performance and interoperability criteria.

        In recognition of the importance and timeliness of quantum technologies, IEC and ISO launched a new joint technical committee on the subject in January 2024. Over the last two years, JTC 3 has formed a number of working groups covering the application areas of quantum computing, sensing, and communications, as well as terminology, random number generation, and enabling technologies.

        This presentation will provide an overview of the emerging structure of JTC 3 and the standardization projects that have recently been launched. It will also summarize Canada's participation to date in the joint technical committee and identify areas where more Canadian expertise would be beneficial. Through this presentation we hope to raise awareness of and interest in this important standardization work, and to increase the membership and participation of Canadian technical experts in IEC/ISO JTC 3 Quantum Technologies through the Canadian Mirror Committee.

        Speaker: Kevin Thomson (National Research Council Canada)
      • 47
        The Metre Convention – 150 years (and counting) of confidence in measurement

        It is impossible to find a physics specialty that does not rely on accurate measurements, and confidence in any single measurement is based on the International System of Units (SI), which defines all the base and derived measurement quantities required for science and engineering in the 21st century. In parallel, there is a legal framework for international metrology, the Metre Convention, which lays out how measurements made at different times and in different locations can be appropriately linked. 2025 marked the 150th anniversary of the signing of the Metre Convention, the creation of the Bureau International des Poids et Mesures (BIPM), and the beginning of the modern metrology era. Canada became a member state of the Convention on June 15, 1907, and the National Research Council in Ottawa is responsible for ensuring that Canada continues to be aligned, scientifically and legally, with the Metre Convention and the International System of Units.

        When one considers measurements in any specialty, it is easy to focus on only one or two key units, whether the kilogram and second for particle physics or the coulomb and kelvin for condensed matter studies. But taking into account everything that goes into an experiment, from sample preparation to the dependencies of the measurement method, it becomes apparent that the complete SI (base and derived units) is likely represented:
        second – controlling experimental acquisition times, determining particle velocities;
        metre – positioning of equipment, characterizing optical path lengths;
        ampere – power requirements, HV supplies, measurement of single electrons in quantum devices;
        kelvin – environmental considerations, cryogenics, thermodynamics;
        kilogram – weight limits for laboratory design, sample preparation;
        mole – chemical analysis (even physicists have to do a little chemistry!);
        candela – luminance characteristics of imaging displays and light sources.

        This presentation will explore the history of international metrology and how it impacts measurements today. It will describe the activities of the Metrology Research Centre at the NRC, Canada's National Metrology Institute, and how it is adapting to new challenges facing science, technology, and society in Canada as we move into the next 150 years of the Metre Convention.

        Speaker: Malcolm McEwen (National Research Council Canada)
      • 48
        Optical imaging and sensing for automotive safety and autonomy

        This presentation will review current and emerging sensing modalities for automotive safety and autonomy. Two main silicon-based sensing technologies will be described: (1) High-dynamic-range (HDR, up to 140 dB) CMOS image sensors (CIS) and (2) Single-photon avalanche diode (SPAD) arrays with >30 % photon detection efficiency (PDE).
        Firstly, HDR CIS concepts will be reviewed including control of direct sunlight and bright headlamps. Light flicker mitigation (LFM) and motion artifact compensation requirements will be contrasted with typical HDR CIS design tradeoffs related to secondary sub-pixels (~500 nm pitch) and/or short exposures (<0.1 ms).
        Secondly, SPAD arrays for use in direct time-of-flight (dToF) light detection and ranging (LiDAR) will be described. Both short-range (< 50 m) flash LiDAR and long-range (100 m to 300 m) scanning LiDAR approaches will be discussed.
        Finally, novel SPAD-based HDR stop-motion sensors with capabilities for both imaging and depth sensing will be presented.
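The ranging principle behind dToF LiDAR is direct: the SPAD timestamps the photon return, and range follows from half the round-trip time (a minimal sketch with an assumed time of flight):

```python
# Direct time-of-flight ranging: range = c * t_round_trip / 2.
C_M_PER_S = 299_792_458.0      # speed of light
tof_s = 2.0e-7                 # 200 ns round trip (illustrative value)
range_m = 0.5 * C_M_PER_S * tof_s
print(range_m)                 # just under 30 m, i.e. the short-range regime
```

The ~100 m to 300 m long-range regime described above corresponds to round-trip times of roughly 0.7 us to 2 us, which is why timing resolution drives SPAD array design.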

        Speaker: Dr James Mihaychuk (Sony Semiconductor Solutions)
      • 49
        Numerical Simulation of Wetland Groundwater Hydrodynamics Using a Three-Dimensional MODFLOW Model

        Wetlands are ecologically sensitive environments where groundwater plays a vital role in sustaining hydrological balance, water quality, and ecosystem functions. Characterizing groundwater flow in wetland aquifers is challenging due to heterogeneous lithology, fluctuating water levels, and variable hydraulic properties. This study investigates groundwater flow dynamics within a wetland system covering Aviara, Uzere, Ibedeni, and Aboh in Delta State, Nigeria using a three-dimensional numerical modeling approach. A conceptual hydrogeologic model was developed from lithologic logs, hydraulic parameters, and measured groundwater levels. The model domain was discretized into multiple layers to represent vertical and lateral variations in aquifer geometry, and boundary conditions were defined to simulate natural recharge and discharge processes. Steady-state groundwater flow simulations were conducted using the MODFLOW code to evaluate hydraulic head distribution and groundwater flow paths. Results show that groundwater predominantly flows from high-head recharge areas toward lower-head wetland zones, particularly toward Wells 5–9, 1–4, and 12. Simulated groundwater flow rates range from −0.0064 m/day to 0.00692 m/day. Regional flow directions are mainly from Aviara toward Idheze and from Uzere toward Aboh, while localized flow reversals occur between Idheze–Ivrogbo and Aboh–Ibedeni. These patterns highlight significant spatial variability in groundwater movement and enable effective delineation of recharge and discharge zones within the wetland aquifer system. Overall, the study demonstrates the applicability of three-dimensional MODFLOW modeling for characterizing complex wetland hydrogeology and provides a scientific framework to support sustainable groundwater and wetland resource management.
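MODFLOW's steady-state solution is a cell-by-cell finite-difference water balance; the idea can be sketched on a toy homogeneous grid (assumed fixed-head boundaries on two edges and no-flow on the others; a real model adds layering, recharge, and variable conductivity):

```python
import numpy as np

# Toy steady-state groundwater head solver on a homogeneous single-layer grid:
# the same finite-difference water balance MODFLOW discretizes, minus
# layering, recharge, and heterogeneity (all assumed away here).
h = np.full((20, 30), 5.0)
h[:, 0], h[:, -1] = 10.0, 2.0            # fixed heads: recharge west, wetland east
for _ in range(5000):                    # Jacobi iteration to steady state
    h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                            h[1:-1, :-2] + h[1:-1, 2:])
    h[0, 1:-1], h[-1, 1:-1] = h[1, 1:-1], h[-2, 1:-1]   # no-flow north/south
# Flow follows -grad(h): from the high-head boundary toward the low-head zone,
# mirroring the recharge-to-wetland flow paths described in the abstract.
```

On this homogeneous grid the converged head field is linear between the two fixed-head boundaries; heterogeneous conductivity and recharge are what produce the spatially variable flow paths seen in the full model.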

        Speaker: Prof. Merrious Oviri Ofomola (Delta State University, Abraka, Nigeria)
    • (PPD) M2-10 | (PPD)
      • 50
        Standard Model, Higgs and di-Higgs precision physics with ATLAS at the LHC

        Precision measurements of W, Z and Higgs boson production are presented. Such measurements are powerful probes of the electroweak sector of the Standard Model as well as of proton structure. First, new ATLAS results are summarized, including the world's first measurement of the complete set of angular coefficients and full phase-space differential cross-sections of W boson production. These measurements use low-pileup data, which allow an optimised reconstruction of the W boson transverse momentum. The talk then presents precise measurements of Higgs boson production and decay rates, obtained using the full Run 2 and partial Run 3 pp collision datasets collected by the ATLAS experiment at 13 TeV and 13.6 TeV. These include total and fiducial cross-sections for the main Higgs boson production processes as well as branching ratios into final states with bosons and fermions. Differential cross-sections in a variety of observables are also reported, together with a fine-grained description of the Higgs boson production kinematics within the Simplified Template Cross-section (STXS) framework.

        Speaker: Matthew Basso (TRIUMF (CA))
      • 51
        What can collider experiments tell us about axions in multi-Higgs models

        The QCD axion has long been considered to provide a natural solution to the strong CP problem. One of the two main classes of models that give rise to an axion is the Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) model, which contains two Higgs doublets and a singlet complex scalar field. Generally, the LHC does not conduct searches for the QCD axion, whereas experiments at the LHC are very important for two-Higgs doublet model searches. We explore the collider phenomenology of multi-Higgs models with an axion.

        Speaker: Srobona Basak (Carleton University)
      • 52
        A dark matter search with the ATLAS detector for an emerging jet signature

        The nature and properties of dark matter remain among the most important open questions in modern physics. Under a theorized analogue of Standard Model (SM) QCD, high-energy proton-proton collisions at the LHC could produce a unique signature consisting of two beams, or 'jets', of Standard Model particles and two composed of dark matter. The dark matter particles would then decay back into SM particles over macroscopic distances within the ATLAS detector. These collimated dark matter beams are referred to as emerging jets, as their visible SM components 'emerge' throughout the detector components. The detection of emerging jets would provide evidence of particle dark matter created at the LHC and open up new strategies to investigate this phenomenon.

        Using √s = 13 TeV proton-proton collision ATLAS data recorded from 2015 to 2018, the results of a search for emerging jets will be presented. The theoretical motivations, analysis strategies, and search results will be discussed. Although the presence of emerging jets was not detected, new limits on the possible parameter space were set, excluding many models.

        Speaker: Jérémie LePage-Bourbonnais (Carleton University)
      • 53
        Non-thermal Leptogenesis and Cosmic Inflation

        There are more particles than antiparticles in our universe. A simple solution to this unexpected asymmetry, known as the baryon asymmetry of the universe (BAU), is leptogenesis, in which a heavy right-handed neutrino (RHN) is introduced. Interestingly, the mass of the RHN coincides with the energy scales of inflation. In addition to this outstanding problem, the nature of dark matter (DM) remains elusive, with the only certainty being that it interacts gravitationally. This provides strong motivation to investigate particle production that occurs through gravitational interactions, commonly referred to as cosmological gravitational particle production (CGPP). During inflation, spacetime expands non-adiabatically, necessarily leading to this particle production mechanism. A simple model that connects the Standard Model (SM) to inflation is one that takes the inflaton to be the SM Higgs field, though the mechanism does not require it to be the SM Higgs. We investigate the dynamics of a spectator scalar field and find the resultant particle spectrum. We find a significant increase in particle production compared to other forms of inflation, such as quadratic inflation. The spectrum exhibits new features: the occupation number $n_k$ scales with $\xi$, the non-minimal coupling to gravity, and is peaked at a characteristic wavenumber that also scales with $\xi$. This large enhancement in particle production and the new scaling with $\xi$ allow us to explore the parameter space for leptogenesis and DM. In the case of leptogenesis, the spectator field is the heavy RHN, which decays asymmetrically into leptons and, through non-perturbative effects, produces the BAU. Alternatively, the decay products of this spectator scalar field can comprise the DM of our universe.

        Speaker: Tammi Chowdhury (University of Manitoba)
      • 54
        Background Estimation for Precision Electroweak Measurements with ATLAS Detector

        Electroweak production of a Z boson in association with two jets (EW Zjj) provides a clean way of probing vector boson fusion (VBF) and is a crucial test of the electroweak couplings within the Standard Model at high energies. Using the ATLAS detector at the Large Hadron Collider, this analysis measures differential cross sections in the dilepton channel, focusing on the characteristic VBF topology with two forward jets of high transverse momentum, large invariant mass ($m_{jj}$), and large rapidity separation ($\Delta y_{jj}$). The measurement uses the full Run 2 dataset at $\sqrt{s} = 13$ TeV (139 fb$^{-1}$) and most of the available Run 3 dataset at $\sqrt{s} = 13.6$ TeV (164 fb$^{-1}$), enabling independent measurements with increased sensitivity in the high-$m_{jj}$ and high-$\Delta y_{jj}$ regions where the EW contributions dominate. A central challenge of the EW Zjj measurement at a hadron collider is the suppression of the dominant QCD-induced Z+jets background, which closely mimics the signal topology but lacks the characteristic colour-singlet exchange. Previous measurements in this channel have identified reliance on QCD simulation as a primary bottleneck in extracting precise EW yields from data. This work presents a background-estimation strategy using an extended maximum likelihood fit that is designed to have reduced sensitivity to the choice of prior QCD modelling. The fit extracts the primary background normalization and data-driven fractional background scaling factors from two control regions with additional jet activity. The background yields in the signal region and in a third control region, defined by the absence of additional jet activity, are simultaneously corrected using the extracted scaling factors. Our preliminary studies indicate that this approach enables electroweak measurements with reduced systematic uncertainties. 
The proposed strategy has been rigorously stress-tested and shown to be minimally biased and stable under variations in the prior modelling of both signal and background from simulation. The resulting measurements using ATLAS data are expected to provide a valuable benchmark for interpretations within the Effective Field Theory (EFT) framework.
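The control-region idea can be reduced to its simplest form, a transfer-factor estimate (a generic sketch with invented yields, deliberately far simpler than the extended maximum-likelihood fit described above):

```python
# Generic control-region background estimate: normalize simulated QCD Z+jets
# to data in a region where the signal is negligible, then transfer the
# scale factor to the signal region. All yields below are invented.
data_cr, mc_cr = 1520.0, 1400.0   # control-region yields (data, simulation)
mc_sr = 350.0                     # simulated background in the signal region
sf = data_cr / mc_cr              # data-driven normalization scale factor
bkg_sr = sf * mc_sr               # corrected background prediction
```

The likelihood fit in the analysis generalizes this: several scale factors are extracted simultaneously from multiple control regions, which is what reduces the dependence on the prior QCD modelling.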

        Speaker: Ishan Vyas (Carleton University (CA))
    • (DAMOPC) M2-11 | (DPAMPC)
      • 55
        Progress Toward Canada’s First Portable Quantum Gravimeter

        Absolute gravimeters based on falling corner-cube reflectors have been a cornerstone of geophysical research and exploration for decades. On the other hand, quantum gravimeters use matter-wave interferometry with a free-falling cloud of laser-cooled atoms to measure gravity down to the $10^{-9}$ g level. These instruments have demonstrated excellent sensitivity and stability over several months, making them ideal for detecting low-frequency gravity signals. Commercial quantum gravimeters are now available on the market, but there has been surprisingly little work to develop them in Canada. In this talk, I will discuss progress toward building Canada’s first portable quantum gravimeter at the University of New Brunswick. I will also provide an overview of its functionality and its key advantages over traditional technologies for applications in geophysics and geodesy.
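The sensitivity scaling of such instruments follows from the standard Mach-Zehnder atom-interferometer phase, delta_phi = k_eff * g * T^2 (a sketch with assumed numbers; the atomic species and interrogation time below are illustrative, not specifics of the UNB instrument):

```python
import math

# Mach-Zehnder atom interferometer: gravity enters the phase as
# delta_phi = k_eff * g * T^2, so sensitivity grows quadratically with the
# free-fall (interrogation) time T. All values below are assumptions.
k_eff = 4.0 * math.pi / 780e-9   # two-photon wavevector, Rb D2 line (~780 nm)
g = 9.81                         # local gravity, m/s^2
T = 0.1                          # interrogation time, s
delta_phi = k_eff * g * T**2     # ~1.6e6 rad; a 1e-9 relative change in g
                                 # shifts this phase by ~1.6e-3 rad
```

The quadratic dependence on T is why a longer atomic free fall, rather than better phase readout alone, is the main lever for reaching the 10^-9 g level quoted above.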

        Speaker: Prof. Brynle Barrett (University of New Brunswick)
      • 56
        Trapping Barium ions in a surface ion trap for quantum information processing

        Trapped ions are among the most precise hardware platforms for quantum information processing (QIP). Among various ion species, Barium ions are especially promising for scaling up quantum processors. They can be manipulated using visible light, for which advanced photonic devices such as fiber-based modulators are available, unlike the ultraviolet light required for most other ion species. Further, the availability of long-lived metastable states allows for encoding more than two-level systems, or qudits, in a single ion – expanding the controllable Hilbert space. Here, we present our progress towards developing a Barium ion quantum processor. We have successfully trapped 138Ba+ ions in a microfabricated surface trap – a major technical milestone. We use laser ablation to generate a plume of Barium atoms from a metallic target, then apply a two-step photoionization process to ionize and trap ions in an isotope-selective way. We outline challenges that we had to overcome to trap ions, such as the low trapping depth of the surface trap, high plume speed caused by laser ablation, and the existence of coherent dark states. These challenges require optimization over a large parameter space, including the positions, frequencies, and powers of several laser beams, and the voltages of trapping electrodes. We discuss our strategy for drastically reducing the parameter space through careful optical engineering, monitoring and remote control of various system parameters, and systematically searching the parameter space. The Barium ion system forms the basis of a versatile testbed for quantum simulation, as well as employing other quantum algorithms with both qubits and qudits.

        Speaker: Hawking Tan (IQC)
      • 57
        Nonlinear Optical Properties of Photoresponsive Self-Assembled Monolayers

        The field of smart materials, whose properties can be modified in response to an external stimulus, is currently experiencing rapid growth. Among the various stimuli available, light is particularly attractive for controlling materials, as it is readily accessible, biocompatible, non-invasive, and, most importantly, allows excitation with high spatial and temporal resolution. In this context, organic photochromic compounds have been extensively studied and incorporated into materials for a wide range of applications. Photoswitchable compounds are organic molecules capable of reversibly switching from a stable form to a metastable form upon light irradiation. These two photochromic states exhibit different absorption spectra and often display distinct geometries, solubilities, and physicochemical properties. It is precisely this difference in properties between the two states that is exploited in photoactive materials for applications such as chemical and biological sensing, targeted delivery of bioactive molecules, photosensitive catalysis, energy storage, and information storage. In this work, the project aims to design photoresponsive self-assembled monolayers (SAMs) functionalized with nonlinear optical (NLO) photoswitches. The structure–property relationships governing the linear and nonlinear optical responses in solution were investigated in a range of azobenzene (AZO) and Donor–Acceptor Stenhouse Adduct (DASA) derivatives. These photoswitches, bearing an alkyne end group, were then covalently grafted onto an azide-terminated monolayer platform to afford photoresponsive monolayers. The resulting materials were characterized using classical surface analysis techniques such as contact angle goniometry, PM-IRRAS, and ToF-SIMS, as well as advanced optical characterization methods. 
The unique combination of Visible Reflection–Absorption Spectroscopy and Second Harmonic Generation spectroscopy not only allows the assessment of nonlinear optical responses, but also provides unique insight into intrinsic monolayer properties, including surface density and molecular orientation distribution.

        Speaker: Dr Chloé Courdurié (University of Bordeaux)
      • 58
        Photoelectron holography of a heteronuclear molecule

        We report on strong field photoelectron holography in heteronuclear hydrogen chloride (HCl), using coincidence resolved momentum imaging to access channel and orientation dependent electron dynamics. By detecting the photoelectron in coincidence with the ionic fragments, we unambiguously separate dissociative ionization pathways and reconstruct the molecular frame without requiring laser induced alignment or orientation.

        The measured photoelectron momentum distributions exhibit clear holographic interference structures that arise from the coherent superposition of direct and rescattered electron trajectories. Unlike homonuclear molecules, HCl provides an intrinsic molecular asymmetry that enables direct access to orientation dependent strong field dynamics. We show that this asymmetry manifests itself not through a breakdown of holographic symmetry, but through orientation selective population of ionization channels and corresponding phase shifts in the holographic fringes.

        To support the experimental observations, we compare the data to simulations based on the Coulomb quantum orbit strong field approximation. The calculations reproduce the main holographic features and confirm that the observed patterns originate from long trajectory interference, with the heteronuclear character entering primarily through the molecular potential and orientation dependent ionization amplitude. This benchmarking establishes channel resolved holography as a sensitive probe of molecular structure and dynamics beyond symmetric systems.

        Our results demonstrate that strong field photoelectron holography can be extended to heteronuclear molecules in a fully coincidence resolved framework, providing access to orientation dependent electron scattering without field induced symmetry breaking. This approach opens new opportunities for imaging ultrafast coupled electron nuclear dynamics in polar molecules and more complex systems, and establishes a robust pathway toward quantitative molecular frame holography in strong laser fields.

        Speaker: Dr Andre Staudte (Joint Laboratory for Attosecond Science of the National Research Council and the University of Ottawa)
      • 59
        Control of molecular rotation in helium nanodroplets with an optical centrifuge

        We experimentally demonstrate that the rotation of molecules embedded in helium nanodroplets can be controlled with an optical centrifuge, allowing for the study of molecular dynamics inside the strongly interacting many-body environment of superfluid helium at variable levels of rotational excitation. We show that forced in-field rotation of molecules is possible over a continuous range of frequencies, and that with resonant excitation the field-free excited state rotational dynamics can be observed. This allows for time-domain measurements of state lifetimes, and allows us to study the interaction of superfluids with defects at the atomic level.

        Speaker: Dr Ian MacPhail-Bartley (University of British Columbia)
    • (DCMMP) M2-12 | (DPMCM)
    • (DTP) M2-2 | (DPT)
      • 60
        Results at strong coupling via finite path integral limits

        Solving quantum field theories at strong coupling remains a challenging task. The main issue is that the usual perturbative series are asymptotic series which can be useful at weak coupling but break down at strong coupling. However, we show that if the limits of integration in the path integral are finite, the perturbative series is an absolutely convergent series which works well at strong coupling. We explain how this avoids Dyson's famous argument on convergence. For now, we apply this perturbative approach to $\lambda\phi^4$ theory in 0+0 dimensions (a basic integral) and 0+1 dimensions (quantum anharmonic oscillator). We begin by showing that finite integral limits yield a convergent series in agreement with exact analytical results for a basic integral. We then consider the energy of the anharmonic oscillator. In quantum mechanics, if one is interested in the energy, it is often easier to use Schrödinger's equation to develop a perturbative series than path integrals. Finite path integral limits are then equivalent to placing infinite walls at positions $-L$ and $L$ in the potential, where $L$ is positive, finite and can be arbitrarily large. With walls, the series expansion for the energy is convergent and approaches the energy of the anharmonic oscillator as the walls are moved further apart. We obtain the ground state energy at strong coupling and the result agrees with the exact energy (obtained numerically) to within 0.1%, a remarkable result in light of the fact that at strong coupling the usual perturbative series diverges badly right from the start.
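        As a worked illustration of the 0+0-dimensional case mentioned in the abstract (a sketch of the general mechanism, not necessarily the speaker's exact derivation): with finite limits $\pm L$, expanding the interaction term gives

```latex
Z_L(\lambda)
  = \int_{-L}^{L} dx\, e^{-x^2/2 - \lambda x^4}
  = \sum_{n=0}^{\infty} \frac{(-\lambda)^n}{n!}
    \int_{-L}^{L} dx\, x^{4n} e^{-x^2/2},
\qquad
\left|\frac{\lambda^n}{n!} \int_{-L}^{L} dx\, x^{4n} e^{-x^2/2}\right|
  \le \frac{(\lambda L^4)^n}{n!}\, 2L .
```

        Since $\sum_n (\lambda L^4)^n/n! = e^{\lambda L^4}$ is finite for any coupling, the series converges absolutely. With $L \to \infty$ the $n$-th moment instead grows like $(4n-1)!!$, faster than $n!$, and the series is only asymptotic, which is Dyson's scenario.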

        Speaker: Prof. Ariel Edery
      • 61
        Anomaly mediated supersymmetry breaking and Seiberg-Witten theory

        Recently, various authors have studied the vacua of strongly coupled field theories via the introduction of anomaly mediated supersymmetry breaking (AMSB) to supersymmetric (SUSY) gauge theories. SUSY gauge theories are more tractable than their non-SUSY counterparts, and AMSB allows one to retain a level of analytic control over the theory even as SUSY is broken. It is thus an alluring tool to study non-SUSY vacua. However, it is known that in at least some cases, there is a phase transition as the SUSY-breaking scale crosses the confinement scale. It is therefore unclear to what extent these types of calculations accurately portray realistic non-SUSY physics.

        We introduce AMSB to $\mathcal{N}=2$ SQCD with massless squarks. In the UV, we find that the resulting theories retain $\mathcal{N}=1$ SUSY for all $SU(N)$ gauge groups. This provides an opportunity to study the robustness of AMSB itself. We have a much better understanding of $\mathcal{N}=1$ SQCD than we do its non-SUSY counterpart, allowing us to compare the vacua associated to different scales of SUSY-breaking.

        Specializing to $SU(2)$, we calculate the SUSY breaking effects in the IR. The surviving vacua exhibit monopole condensation and confinement as in the famed 1994 Seiberg-Witten papers. This result provides tentative support to the validity of a recent attempt by Murayama to derive the vacuum structure of massless QCD using AMSB.

        Our calculations in some sense determine the leading low-energy running of the fundamental gauge coupling, deep in the nonperturbative regime. We heuristically argue that this running can be interpreted as a statement about Wilson loops - namely, that they follow an area law.

        Speaker: Cyrus Robertson Orkish
      • 62
        Quark clustering in heavy pentaquarks: insights from QCD sum rules

        Hadrons are composite states of quarks and gluons bound by the strong interaction. Conventional hadrons include baryons, composed of three quarks, and mesons, composed of a quark–antiquark pair. Non-conventional hadrons, known as exotics, have more complex quark content. A prominent example is the pentaquark, consisting of four quarks and one antiquark. In recent years, the LHCb experiment has discovered several pentaquarks containing charm and anti-charm quarks, posing new challenges for our understanding of how quarks organize themselves inside hadrons.

        Determining the internal structure of pentaquarks remains an open problem. One class of models describes pentaquarks as hadronic molecules—weakly bound, color-neutral baryon–meson systems. An alternative is the compact model, in which quarks form tightly bound colored clusters that act as effective building blocks. In this picture, a pentaquark consists of a diquark (two quarks) and a triquark (two quarks and an antiquark). Because both clusters carry color, the compact model predicts stronger binding, analogous to the quark–antiquark interaction in conventional mesons. This analogy has motivated the extension of successful meson potential models to the pentaquark sector.

        A key limitation of such potential models is that the constituent triquark mass enters as an external input and must be estimated phenomenologically. An independent determination of this quantity is therefore essential for assessing the reliability of compact pentaquark models.

        Quantum chromodynamics (QCD) sum rules provide a framework for relating QCD-level dynamics to hadronic properties and are well suited for this task. In this talk, I will present our work using QCD sum rules to determine the constituent mass of a triquark containing an anti-charm quark. We compare our results with values used in potential models and discuss the implications for the pentaquarks observed by LHCb. Finally, directions for future work will be outlined.

        Speaker: Robin Kleiv (Thompson Rivers University)
      • 63
        Higgs Inflation: Particle Factory

        Cosmic Inflation provides a window into the highest energy scales realized in the history of our universe. Higgs Inflation, wherein the Standard Model Higgs or a variant is identified as the inflaton, provides a minimal framework for incorporating cosmic inflation into the Standard Model. In this talk I will study particle production in Higgs Inflation, and present new idiosyncratic aspects which distinguish it from other inflation models, with implications for the production of dark matter and the baryon asymmetry of the universe.

        Speaker: Evan McDonough (University of Winnipeg)
      • 64
        Generalized uncertainty relations in non-relativistic quantum field theories

        Generalized uncertainty relations are the epitome of bridging quantum theory and the general theory of relativity, which, in principle, should lead to the elusive consistent theory of quantum gravity. We show that such generalized uncertainty relations also emerge from the functional integral formalism in the context of non-relativistic quantum field theories. The derivation of the Generalized Uncertainty Principle (GUP) within the framework of non-relativistic quantum mechanics serves as a guiding example. This idea is then expanded to a non-relativistic quantum field theory, and can be used to provide phenomenological predictions for quantum gravity tests in condensed matter systems. As an example, Bose-Einstein condensates are considered.
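        For orientation, one widely used form of the GUP (a standard textbook expression, not necessarily the specific relation derived in this work) modifies the canonical commutator and, for states with $\langle p \rangle = 0$, the uncertainty relation as

```latex
[\hat{x}, \hat{p}] = i\hbar\left(1 + \beta \hat{p}^2\right)
\quad\Longrightarrow\quad
\Delta x\, \Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^2\right),
```

        which implies a minimal position uncertainty $\Delta x_{\min} = \hbar\sqrt{\beta}$, the hallmark of quantum-gravity-inspired minimal-length models.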

        Speaker: Dr Mitja Fridman (Czech Technical University in Prague)
    • (DAMOPC) M2-3 | (DPAMPC)
      • 65
        Improved detection scheme for a cold-atom gravimeter

        High-sensitivity gravity measurements are essential for environmental monitoring, natural resource exploration, and national security. Quantum gravimeters utilizing cold-atom interferometers have demonstrated high levels of accuracy, sensitivity, and long-term stability that outperform traditional gravimeters based on falling corner cubes. We present improvements to the detection system of our table-top quantum gravimeter by using a spatially resolved state detection scheme. Two vertically separated light sheets in the horizontal plane create independent detection zones for the F=1 and F=2 ground states in Rb. Cold atoms traversing the light sheets fluoresce with an intensity proportional to the population in each state. Large-diameter optics are used to increase the number of collected photons and therefore the signal-to-noise ratio of the quantum gravimeter. This detection system is also located ~20 cm below the release point of the cloud, which extends the total interrogation time up to 2T = 200 ms. Since the gravimeter sensitivity scales as T², these improvements will position our instrument near the state-of-the-art and allow it to serve as a high-accuracy gravity reference for other gravimeters.
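        For context, the T² scaling follows from the phase of a standard Mach–Zehnder light-pulse atom interferometer (a generic textbook relation, with $k_{\mathrm{eff}}$ the effective two-photon wave vector, not a specification of this particular instrument):

```latex
\Delta\phi = k_{\mathrm{eff}}\, g\, T^2
\quad\Longrightarrow\quad
\delta g = \frac{\delta\phi}{k_{\mathrm{eff}}\, T^2},
```

        so extending the interrogation time to 2T = 200 ms (T = 100 ms) directly reduces the per-shot uncertainty in g through the T² factor.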

        Speaker: Timothy Hunt (University of New Brunswick)
      • 66
        Towards a Bose Einstein condensate of Ag

        The matter-antimatter asymmetry is a long-standing problem in physics. Sakharov showed that one requirement to explain this asymmetry is CP violation. To date, there has not been sufficient CP violation observed in the Standard Model to fulfill this condition. We propose using novel FrAg molecules, which are highly sensitive to hadronic CP-violating physics, to look for the Fr nuclear Schiff moment. These molecules can be formed by combining laser-cooled atoms using standard techniques developed for ultracold bi-alkali molecules. Here we report on progress towards assembling these molecules by understanding the scattering properties of Ag. We report on observed Λ-enhanced gray molasses cooling, trapping in an optical dipole trap, and Feshbach resonances. We also report on progress towards a Ag BEC.

        Speaker: Addison Okell (University of Waterloo)
      • 67
        Analogue Black Holes in BECs

        Hawking radiation is a quantum phenomenon arising from event horizons but is probably impossible to observe in its original astrophysical context. An alternative is to study an analogue system such as a Bose-Einstein condensate (BEC) where event horizons can occur in regions where the BEC flows faster than the speed of sound in the system, and phonons within the condensate play the role of photons. My research is about the theory of particle production in BEC black holes; in particular, bringing out universal features using tools such as catastrophe theory. In this presentation I will discuss the theoretical background of analogue black holes and will delve into concepts of catastrophe theory and how it can be used to study particle creation.
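        For background, the standard (1+1)-dimensional acoustic line element for phonons in a flowing condensate (Unruh's analogue metric, quoted here for orientation rather than taken from this talk) reads

```latex
ds^2 \;\propto\; -\left(c^2 - v^2\right)dt^2 \;-\; 2v\, dx\, dt \;+\; dx^2,
```

        with $c$ the local speed of sound and $v$ the flow velocity. A sonic horizon forms where $|v| = c$, and the analogue Hawking temperature is set by the velocity gradient there, $k_B T_H = (\hbar/2\pi)\,|\partial_x(c - |v|)|$ evaluated at the horizon.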

        Speaker: Nevan Keating (McMaster University)
      • 68
        Quantum Quenches in the two-component Bose-Hubbard model

        Cold atoms in optical lattices can be used as quantum simulators to study the temporal evolution of quantum systems, which has led to increasing interest in the out-of-equilibrium dynamics of bosons in optical lattices. Adding a second species of bosons introduces a wide range of novel quantum phases and provides a platform to explore analogues of spin systems. We study the Bose-Hubbard model for two-component bosons using a strong-coupling approach within the closed-time-path formalism and develop a two-particle irreducible effective action for this problem. We obtain equations of motion for the mean field and full propagators and study these in the low-frequency, long-wavelength limit.

        Speaker: Florian Baer (Simon Fraser University)
      • 69
        Emergent Spacetime Geometries in Bose–Einstein Condensates: From Spherical Refocusing to FLRW Analogues

        Atomic Bose-Einstein condensates (BECs) provide a versatile platform for simulating curved spacetimes via their emergent acoustic metrics. By tailoring the trapping potential, one can realize inhomogeneous density profiles that generate effective geometries conformal to a sphere or a hyperboloid. In the spherical case, phonons follow closed geodesics and refocus at antipodal points, echoing the perfect refocusing found in certain optical systems. We show exact agreement between affine geodesics and ray dynamics in the geometric-optics limit. Beyond this static setting, the framework naturally extends to time-dependent couplings, yielding effective line elements equivalent to Friedmann-Lemaître-Robertson-Walker (FLRW) cosmologies with tunable curvature. Using Gross-Pitaevskii simulations, we demonstrate the refocusing dynamics, test their robustness to perturbations, and compare with preliminary experimental data. This approach provides a concrete route to exploring analogue cosmology and wave dynamics in emergent spacetime backgrounds.

        Speaker: Jay Mehta (McMaster University)
      • 70
        Topological Control of Vortices via Dynamical Instabilities

        Nonequilibrium driving can generate strongly asymmetric responses even in systems without net bias or static symmetry breaking. We consider a framework for controlling vortices in a Bose–Einstein condensate confined to a ring trap through dynamical instabilities. When the system is periodically driven across an instability threshold, the resulting nonlinear response produces asymmetric growth and decay cycles that allow vortices to be deterministically switched on and off, even in symmetric settings. These dynamics provide a natural route to instability-enabled control in driven quantum fluids.

        Speaker: Denise Kamp
    • (DCMMP) M2-4 | (DPMCM)
    • (DQI) M2-5 | (DIQ)
      • 71
        On the complexity of shallow quantum circuits with and without noise

        What computational problems can be solved efficiently with quantum computers, but not with classical computers? One of the most well understood examples of a problem that exhibits such a quantum computational advantage is sampling from quantum circuits. Based on a growing body of theoretical evidence, the problem of sampling from deep, random quantum circuits is now believed to be classically intractable, yet implementable with today's 'NISQ' hardware. However, much less is known about sampling problems beyond the particular regime of deep random circuits. In this talk, I will explore the boundaries of computational advantage in circuit sampling in different regimes, focusing on shallow circuits and the effects of noise. I will describe a sharp transition that occurs as a function of the circuit depth, which separates a classically simulable phase from a putatively complex, non-simulable phase. I will present a precise, statistical description of the quantum states generated by these circuits in the shallow, non-simulable regime, which both accounts for the hardness of classical simulations, and also predicts a dramatic susceptibility to noise. Based on this structural characterisation of shallow circuit sampling, I will argue that even a very small noise rate - scaling with the system size n as log(n)/n - renders the output distribution classically simulable. This highlights the extreme sensitivity of circuit sampling problems to noise, and sheds light on the complexity of simulating quantum many-body dynamics more generally.

        Speaker: Max McGinley (Cambridge)
      • 72
        Bell State Analysis Provides an Optimal Basis Saturating the Quantum Cramér-Rao Bound in Rotation Sensing

        The second-order anti-coherent state is known to achieve the ultimate limit for sensing rotation around an unknown axis, thereby saturating the quantum Cramér-Rao bound. It is convenient to map these states into an angular-momentum basis. For measuring a small rotation angle, a corresponding set of bases has also been identified and postulated to provide the ultimate precision in rotation sensing. Let a second-order anti-coherent state serve as the initial state. An optimal basis measurement for rotation-angle extraction consists of projecting the resultant state onto the initial state, and onto the states obtained by applying the angular momentum operators J_x, J_y, and J_z to the initial state. However, making these projection measurements is not easy, since the measurement bases correspond to highly entangled states. Therefore, no currently known strategies can perform the optimal basis measurement efficiently.

        To solve this problem, we showed that the probability of a measurement outcome in the optimal basis can be written as a sum over pairwise Bell-basis measurements between different quanta. Furthermore, due to the symmetry of the second-order anti-coherent state, a fully optical procedure for Bell-state analysis is possible. Since the second-order anti-coherent states contain only symmetric Bell states, and a unitary operation describing the rotation cannot change the symmetry of the state, only the symmetric Bell triplet state will be present in the decomposition of the final state after rotation. Thus, with additional single-qubit operations, using only linear-optical components and single-photon detectors is sufficient to achieve near-unity readout, as long as we assign a path degree of freedom to each photon. We theoretically showed that our scheme is achievable with four-photon tetrahedron states and six-photon balanced NOON states.

        Speaker: Zhuoran Bao
      • 73
        Adaptively Reconstructing a Quantum State Using Artificial Neural Networks

        Quantum state tomography (QST) aims to reconstruct an unknown quantum state from measurement statistics and is a central tool for characterization and validation in quantum information and quantum computing. A key practical limitation in QST is sample cost: reconstructing a quantum state requires data from an informationally complete measurement set, and the number of distinct measurement settings scales exponentially with the system size. Adaptive QST mitigates this burden by using interim data to choose subsequent measurement settings, allocating samples to the most informative settings and reducing the number of settings that must be sampled exhaustively.
        We investigate an adaptive QST strategy that uses Restricted Boltzmann Machines (RBMs), energy-based generative neural networks trained on measurement results, to represent quantum states. A committee of independently trained RBMs guides measurement selection by ranking candidate measurement settings, prioritizing those expected to reduce the model uncertainty the most. To quantify the benefit of adaptivity under controlled conditions, we implement a candidate pool of Pauli measurement settings and simulate measurement outcomes in Qiskit (an open-source quantum computing software development kit). Target states are drawn from random ensembles spanning both pure and mixed states for one- to five-qubit systems.
        For each total sample budget, we compare three reconstruction pipelines: (i) non-adaptive RBM tomography with a fixed measurement schedule, (ii) adaptive RBM tomography with data-driven measurement selection, and (iii) conventional maximum-likelihood estimation (MLE), which reconstructs the quantum state by maximizing the likelihood of the observed measurement data subject to physicality constraints. We quantify the performance by evaluating the infidelity (a difference measure between quantum states) for varying numbers of total samples, while also comparing total computational runtimes. This study aims to delineate the regimes in which adaptive RBM-based QST provides measurable gains over the non-adaptive design and MLE under identical measurement constraints.
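        As a toy illustration of the committee-guided selection step (a hypothetical sketch in plain Python, not the authors' RBM/Qiskit pipeline; the data structure and function names are invented for illustration): each committee member supplies its current outcome-probability estimate for every candidate Pauli setting, and the next measurement is taken in the setting on which the committee disagrees the most.

```python
# Hypothetical sketch of committee-based adaptive measurement selection.
# Each committee member is represented by a dict mapping a candidate
# Pauli setting to that member's estimated P(outcome = +1).
import statistics

def pick_next_setting(committee, settings):
    """Return the setting with the largest variance across committee members."""
    def disagreement(s):
        return statistics.pvariance([member[s] for member in committee])
    return max(settings, key=disagreement)

committee = [
    {"X": 0.50, "Y": 0.48, "Z": 0.90},
    {"X": 0.55, "Y": 0.52, "Z": 0.91},
    {"X": 0.20, "Y": 0.50, "Z": 0.89},  # members disagree sharply on "X"
]
print(pick_next_setting(committee, ["X", "Y", "Z"]))  # prints X
```

        In a real pipeline the probabilities would come from independently trained RBMs and the chosen setting would be sampled on the simulator before retraining; the ranking step itself, however, is just this argmax over a disagreement score.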

        Speaker: Riza Fazili (University of Ottawa)
      • 74
        Thermalization of a quantum circuit

        Thermalization is a common phenomenon in classical statistical mechanics. We encounter it every time we pour milk into a coffee cup or release a gas into a larger volume: the system loses the memory of its initial conditions and reaches the equilibrium statistical value. Quantum systems, by contrast, normally display unitary evolution in time. I show how arguments for thermalization can be applied to quantum algorithms for linear algebra. The result is the removal of wavefunction preparation in several examples.

        This research was undertaken, in part, thanks to funding from the Canada Research Chairs Program (CRC-2021-00257). This work has been supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) under grants RGPIN-2023-05510 and DGECR-2023-00026.

        Speaker: Thomas Baker (Department of Physics & Astronomy and also of Chemistry, University of Victoria)
      • 75
        Discussion
    • (DNP) M2-6 Nuclear Astrophysics | Astrophysique nucléaire (DPN)
      • 76
        (α,n) reactions and the origin of the first r-process peak

        The origin of the heavy elements of the first r-process peak, between strontium and silver, observed in Galactic halo stars (limited-r stars) remains an open question [1]. Neutrino-driven winds in explosive environments, either neutron-rich (weak r-process/α-process) or proton-rich (νp-process), present a viable option. In this talk, I will discuss how we can distinguish between different scenarios by constraining the nuclear physics uncertainties, particularly the (α,n) reaction rates in the weak r-process, and comparing nucleosynthesis models to the abundance patterns observed in limited-r stars [2]. I will discuss current experimental campaigns aimed at measuring key (α,n) reaction rates [3,4], and how new, high-quality spectroscopic data from limited-r stars will provide essential constraints for the next generation of astrophysical models.

        References
        [1] A. Psaltis et al., Astrophys. J. 935, 27 (2022).
        [2] A. Psaltis et al., Astrophys. J. 966, 11 (2024).
        [3] M. Williams et al., Phys. Rev. Lett. 134, 112701 (2025).
        [4] C. Fougères et al., Astrophys. J. 983, 142 (2025).

        Speaker: Thanassis Psaltis (Saint Mary's University)
      • 77
        Absolute cross section measurements for astrophysical reactions with the EMMA recoil mass spectrometer at TRIUMF

        The transmission efficiency of the EMMA recoil mass spectrometer at TRIUMF has been measured to enable direct absolute cross section measurements of astrophysical reactions. I will describe the measurements, the mathematical models of the spectrometer acceptance as a function of angle and energy/charge derived from them, and some recent determinations of astrophysical reaction cross sections.

        Speaker: Dr Barry Davids (TRIUMF)
      • 78
        The 18O(α, γ)22Ne reaction measurement with DRAGON in inverse kinematics

        Have you ever wondered how all the elements we find here on Earth and in the universe were created? Nearly all naturally occurring elements are produced via nuclear reactions in the interiors of stars. Half of the elements heavier than iron are synthesized in the slow neutron capture process (s-process), which occurs mainly in two astrophysical sites: asymptotic giant branch (AGB) stars (main s-process) and massive stars during core helium burning and shell carbon burning (weak s-process). The $^{18}$O(α, γ)$^{22}$Ne reaction is a key link in determining the availability of $^{22}$Ne in the stellar environment, which affects the number of neutrons available for the s-process through the $^{22}$Ne(α, n)$^{25}$Mg reaction. The $^{18}$O(α, γ)$^{22}$Ne reaction was measured in inverse kinematics for the first time using the DRAGON recoil separator at TRIUMF, Canada's particle accelerator centre. In this talk, I will present the scientific motivation for studying this reaction, the experimental setup, and progress on the analysis so far. I will also discuss potential future plans for additional measurements of the $^{18}$O(α, γ)$^{22}$Ne reaction with DRAGON.

        Speaker: Dhruval Shah (McMaster University)
      • 79
        Investigation of Decay Channels from the 6.15 MeV Resonance in ¹⁸Ne Using the ¹⁷F+p Reaction with ACTAR TPC

        Type I X-ray bursts and classical novae are powered by thermonuclear reactions on the surface of accreting compact objects, where breakout reactions from the hot CNO cycle play a critical role. One such reaction is ¹⁴O(α,p)¹⁷F, which at typical nova temperatures is dominated by a resonant state at Ex = 6.15 MeV in ¹⁸Ne. While the energy and spin of this resonance are well established, its decay scheme remains uncertain, with previous studies reporting inconsistent results for a possible two-proton (2p) decay branch.

        To investigate the decay modes from excited states in ¹⁸Ne, a resonant scattering experiment was performed at TRIUMF using the Active Target and Time Projection Chamber (ACTAR TPC). A radioactive ¹⁷F beam with an energy of 5.5 MeV/u was delivered into a hydrogen-based gas mixture, allowing simultaneous detection of charged reaction products. The use of a TPC is essential as it enables full kinematic reconstruction of events and unambiguous particle identification for single-proton, two-proton, and α decay channels.

        In this presentation, I will present the experimental approach, track reconstruction techniques, and ongoing analysis aimed at constraining the relative contributions of different decay modes of the 6.15 MeV resonance in ¹⁸Ne.
        This work was performed in collaboration with GANIL (France) and TRIUMF (Canada).

        Speaker: Fatima Aljarrah (University of Regina)
      • 80
        Beta-Delayed Charged-Particle Emission From 20Mg

        One of the most important nuclear reaction sequences in astrophysics is 15O(α,γ)19Ne(p,γ)20Na, which provides a possible breakout pathway from the hot CNO cycle in stars. Studying this reaction directly in the laboratory is challenging; instead, an indirect study using the β-delayed proton and α decays of 20Mg was recently performed at TRIUMF. The experiment used the Gamma-Ray Infrastructure for Fundamental Investigations of Nuclei (GRIFFIN) gamma-ray spectrometer and, for the first time, the Regina Cube for Multiple Particles (RCMP), a newly developed silicon detector array designed to detect low-energy protons and alpha particles. This setup enables the most sensitive search to date for rare decay branches and gamma-ray transitions from astrophysically important states. My thesis focuses on calibrating the RCMP array and analyzing this new high-statistics dataset to constrain the properties of resonances that play a key role in stellar nucleosynthesis.

        Speaker: Sydney Plante (University of Regina)
    • (DPE) M2-7 Student-centered education | Éducation centrée sur l'élève (DEP)
      • 81
        What do high school teachers want 1st year university instructors to know?

        The transition from high school to first-year courses is fraught with growing challenges. While university instructors notice a widening gap in foundational skills, high school educators face evolving classroom realities and are told to “start where the student is at”. Neither side has a perfect solution, and both sides are frustrated. I will share some observations gathered from high school physics teachers. This session serves as an open, yet constructive, forum for educators to share their observations, assumptions and intentions. By examining where our expectations diverge and where our efforts overlap, we will have a necessary conversation about how to better support students and educators navigating this increasingly difficult step.

        Speaker: Michelle Lee (OCDSB)
      • 82
        Tomorrow’s scientists: How a new program at SNOLAB is engaging youth in STEM

        Developing scientific inquiry skills in elementary and middle school students is critical for fostering long-term engagement with science and building scientific literacy in the next generation. With support from NSERC PromoScience funding, the SNOLAB education team has developed a new program for students in Grades 4–8 that emphasizes authentic, hands-on learning connected to Canada’s deep underground science laboratory.

        This program introduces students to the scientific method and the world of research by guiding them through six sessions designed for in-person or virtual classroom delivery. Students learn about asking testable questions, designing experiments, interpreting results, and communicating what they found. The proposed session will present a program overview, share the initial outcomes from the pilot of the program, and discuss plans for intentional program expansion.

        Session participants will be invited to take part in group learning opportunities, to join discussions on the value of early education programming to the field of physics, and to consider how physicists can support inquiry-based science education at the elementary and middle school levels.

        Speakers: Blaire Flynn (SNOLAB), Rachel Richardson (SNOLAB)
      • 83
        From Collaborative Competition to Community: Five Decades of the UBC Physics Olympics

        This presentation examines how the UBC Physics Olympics (https://physoly.phas.ubc.ca/) has evolved into one of Canada’s largest and most sustained physics outreach initiatives, directly aligned with the CAP’s mission to promote physics, support education and training, and strengthen the physics community. Each year, more than 80 teams from across British Columbia (often with multiple teams from the same school) participate, with each team comprising approximately 30 students. The event actively supports pedagogical innovation by engaging students in intellectually demanding, team-based challenges that include experimental laboratory tasks, Quizzics conceptual competitions, Fermi problems, and sophisticated pre-build engineering challenges prepared well in advance. New tools and approaches, such as smartphone-based investigations using Phyphox, are intentionally embedded to promote modelling, approximation, experimental design, and reasoning under uncertainty. Supported by over 80 volunteers, undergraduate and graduate students, faculty, and staff, the Physics Olympics fosters sustained collaboration between physicists, physics educators, and teachers. Notably, many former participants have gone on to become physics students and secondary physics teachers, highlighting the event’s long-term impact on the physics education pipeline and the larger community.

        Speaker: Marina Milner-Bolotin (The University of British Columbia)
    • (DPMB) M2-8 | (DPMB)
      • 84
        Differentiation of Bacterial Species in Biomedical Specimens Using Laser-Induced Breakdown Spectroscopy

        Our research group has previously demonstrated the ability to detect bacteria within numerous types of biological media to a high degree of accuracy using laser-induced breakdown spectroscopy (LIBS). This presentation will describe our efforts to differentiate several bacterial species using this spectroscopic method in addition to detection. Experiments were conducted on Escherichia coli, Staphylococcus aureus, Mycobacterium smegmatis, Enterobacter cloacae, and Streptococcus salivarius. These pathogens were chosen as they can induce bacterial meningitis, a life-threatening infection of the membranous tissue surrounding the spinal cord and brain. There is a need for a rapid, convenient, and accessible diagnostic technique capable of accurately distinguishing between bacterial species in cases of such infection. This would allow physicians to prescribe appropriate antibiotic therapy in a timely manner.

        Known aliquots of the aforementioned bacteria were spiked into blood, urine, and artificial cerebrospinal fluid (aCSF). A custom centrifugation system deposited a thin bacterial film onto a nitrocellulose medium. Optical emission spectra were produced by ablating the bacterial film with a 1064 nm, 9 ns Nd:YAG laser, creating a high-temperature microplasma. Approximately 26,500 cells were ablated with each laser pulse.

        Classification of bacterial species based on their LIBS spectra was accomplished with various chemometric methods, namely partial least squares discrimination analysis (PLS-DA) and an artificial neural network with principal component analysis preprocessing (PCA-ANN). Two separate datasets were used in the analysis. For PCA-ANN, the full optical spectrum, ranging from 205 nm to 590 nm, was utilized. For PLS-DA, the intensities of 15 emission lines from Ca, Mg, Na, C, and P and ratios of those intensities formed datasets with 107 total variables.

        The classification accuracies of the resulting analyses will be presented, providing insight into the usefulness of LIBS in discriminating between bacterial species when they are present in human specimens.

        Speaker: Ms Lauren Dmytrow (University of Windsor)
      • 85
        Direct Measurement of Radon Diffusion in Biological Tissue Using Low-Background Techniques

        Radon (222Rn) is a naturally occurring radioactive noble gas. It is recognized as the second leading cause of lung cancer in Canada and the USA. Exposure is currently assumed to be confined to the respiratory epithelium following inhalation. However, as an inert gas, radon may diffuse into porous media, water, and possibly biological soft tissues. Direct measurements of radon movement in and out of biological tissue are lacking. This project will test whether radon can diffuse into and emanate from living biological tissue.

        We are in the conceptual stages of developing a novel experimental workflow at SNOLAB to directly measure radon emanation from biological samples using a sealed, radon-impermeable collection system paired with a high-sensitivity radon emanation board. Collected radon will be quantified using established radon emanation techniques pioneered by SNOLAB. Briefly, the radon board consists of a cryogenic dual-trap transfer system and low-background scintillation (ZnS) Lucas cell alpha counting, enabling detection sensitivities of tens of radon atoms per day. As a proof-of-concept, a synthetic soft-tissue mimetic will be exposed to elevated radon atmospheres and compared with air-exposed controls, with nitrogen flushing used to control residual airborne radon. Time-resolved radon off-gassing measurements will then be used to measure radon retention within the samples.

        Following validation, this approach will be extended to whole-body and organ-specific measurements in a small animal model exposed to a controlled radon environment. Quantifying radon mobility in biological tissue has the potential to challenge existing assumptions about radon dose localization and inform improved biokinetic and dosimetric models relevant to radiation protection, low-dose risk assessment, and underground radiation environments.

        Speaker: Michel Lapointe (SNOLAB)
      • 86
        Bioluminescence as a Biomarker for Assessing Environmental Health

        Mechanically stimulated bioluminescence in Pyrocystis fusiformis acts as a signal for probing environmental health using time-correlated single-photon counting techniques. The York University PCS 121, a portable biosensing instrument based on Silicon Photomultiplier (SiPM) technology, was used to measure photon emission from the marine dinoflagellate P. fusiformis under controlled excitation. The instrument combines Geiger-mode SiPM detectors with a proprietary algorithm for highly sensitive, time-resolved photon counting, offering enhanced performance at low signal-to-noise ratios and integrated piezoelectric mechanical stimulation. Applying a mechanical stimulus to the cells triggers the bioluminescence reaction.

        Cultures of P. fusiformis were exposed to varying concentrations of nitrates (NaNO3; 0.883-10.9 mM) and phosphates (NaH2PO4; 36.3 μM - 3.63 mM) and prepared in a 1:1 ratio of cells + f/2 medium to chemical solution. A dose-dependent response was quantified through live cell counts and measurements of bioluminescence behaviour via photon-counting instrumentation, assessing both emission intensity and decay properties. Phosphate exposure produced measurable changes in bioluminescent properties and cell counts, indicating a significant chemo-physiological response. Across the tested nitrate conditions, no significant changes in cell viability were observed, suggesting relative tolerance of P. fusiformis to nitrate-based stress within this range. This study demonstrates a reproducible approach to the biophysical assessment of environmental health by coupling bioluminescence with photon-counting techniques.

        Speaker: Sofia Puszynska (York University)
      • 87
        Polarization resolved second harmonic generation microscopy reveals molecular disorder in crosslinked collagen fibrils

        Crosslinking of collagen is a critical mechanism in bioengineering and human health. Crosslinks are used to strengthen in-vitro assembled collagen materials, yet excessive crosslinking in-vivo, due to factors such as aging and high blood sugar, can lead to functional loss in human tissues. There is a lack of understanding regarding the effects of crosslinking on collagen molecular structure. The triple helix structure of collagen has traditionally only been accessible using X-ray scattering techniques, measured over macroscopic regions. Second harmonic generation (SHG) microscopy is uniquely positioned to measure the triple-helix structure of collagen molecules with sub-micron resolution.

        In this work, we used a laser-scanning polarization-in polarization-out second harmonic generation (PIPO-SHG) microscope to investigate changes in the collagen triple-helix structure under different crosslinking conditions. Collagen fibrils were assembled in-vitro from type I bovine telocollagen, then incubated in known crosslinkers glutaraldehyde, methylglyoxal, or ribose.

        By fitting to SHG intensity images at different polarizations, we measured a decrease in molecular tilt angle from 44.9 ± 0.3° to 39.7 ± 0.6° after glutaraldehyde treatment. Similar trends were observed for methylglyoxal and ribose. This suggested a stretching of the triple helix due to intermolecular strain. However, the glutaraldehyde-treated fibrils had an angle very close to the SHG “magic angle” of 39.2°, observed when there are significant levels of molecular disorder (Simpson, 1999). To distinguish between the two interpretations, we imaged crosslinked fibrils with atomic force microscopy (AFM), which revealed that the fibril D-band repeat remained unchanged. This indicates that the decrease in molecular tilt angle after collagen crosslinking is due to molecular disorder, rather than a strain-induced shift in the triple helix structure. Our work offers an empirical measurement of the molecular-level effects of collagen crosslinking, with implications for in-vitro collagen material design and quality control as well as aging and disease mechanisms in in-vivo collagen tissues.

        Speaker: Benjamin Hansson
      • 88
        Probing the Surface Chain Length Distribution of Phytoglycogen Nanoparticles Using AFM Force Spectroscopy

        Phytoglycogen (PG) is a natural biopolymer extracted from sweet corn that is composed of highly branched chains of glucose molecules. Its special dendritic chain architecture results in monodisperse nanoparticles with a diameter of 42 nm that are decorated with short chains, and its chemically simple composition results in biodegradability and nontoxicity. This combination of properties makes PG desirable for a variety of biophysical, biomedical, and cosmetic research and applications. We have used atomic force microscopy force spectroscopy (AFM-FS) to study the morphology and mechanical properties of individual PG particles covalently bonded to a flat gold substrate using an intervening self-assembled monolayer of 4-mercaptophenylboronic acid [1]. In AFM-FS, the AFM tip is repeatedly pressed into and retracted from the sample while rastering over a small area of the sample. This allows the measurement of the topography of the sample as well as different mechanical properties, including the sample stiffness and the adhesion forces between the AFM tip and the sample. Recently, we developed an analysis of the AFM-FS data using a supervised machine learning (ML) classifier to identify the location of individual PG nanoparticles on the gold substrate [2]. The ML classifier uses features engineered from individual force-distance curves at each pixel in an AFM scan that describe the mechanical properties of the material interacting with an AFM tip at that location. In the present study, we use the supervised machine learning approach to quantify the AFM tip-PG particle adhesion by identifying and analyzing the force peaks observed during the retraction of the AFM tip in terms of the extensible worm-like chain model. 
This allows the determination of the contour length distribution of the short chains that decorate the PG particle surface, for which we observe a well-defined peak that is consistent with the average length of the short chains measured using small angle neutron scattering (SANS) [3].
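        For reference, the extensible worm-like chain relation used in such fits is commonly written in the Marko–Siggia interpolation form with an enthalpic stretch term (standard background, not taken from the abstract; $L_p$ is the persistence length, $L_c$ the contour length, and $K$ the stretch modulus):

```latex
F(x) = \frac{k_B T}{L_p}\left[\frac{1}{4\left(1 - \frac{x}{L_c} + \frac{F}{K}\right)^{2}} - \frac{1}{4} + \frac{x}{L_c} - \frac{F}{K}\right]
```

        Fitting each retraction force peak with this relation yields a per-chain contour length $L_c$; pooling the fits over many force curves gives the surface chain length distribution.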

        [1] B. Baylis et al., Biomacromolecules 22, 2985 (2021).
        [2] B. Baylis et al., Soft Matter (2026).
        [3] J. Simmons et al., Biomacromolecules 21, 4053 (2020).

        Speaker: Nishel Alexander
      • 89
        The vertebrate ear is an active linearistic subatomic displacement sensor

        The ear is a complex system that acts as a mechanoelectrical transducer, allowing organisms to sense acoustic signals emanating from the world around them. To improve its ability to detect sound, the ear is active in that it uses metabolic energy to generate force that counteracts both frictional forces and noise (e.g., viscous and thermal properties of the inner ear fluid). The healthy vertebrate ear can even generate and emit coherent sound (spontaneous otoacoustic emission, or SOAE) that can be measured in the ear canal using a sensitive microphone. Significant debate exists, however, about the collective biophysical processes of the hair cells underlying this power amplification, as well as the related SOAE generation mechanisms. One proposed framework considers the ear as a collection of oscillatory elements poised on the verge of an instability: a negative damping is paired with a stabilizing nonlinearity that could allow for self-sustained oscillation (e.g., a Hopf bifurcation). A key prediction of such a model is that the response to an external driving force is highly compressive: sound-evoked motions would grow as the cube root of the stimulus level. Here, to test this prediction, we use scanning laser Doppler vibrometry (sLDV) to noninvasively measure the spontaneous and sound-evoked motions of the anole lizard tympanic membrane (TyM, or "eardrum"). We observed spontaneous oscillations (SO) of the TyM as a series of stable spectral peaks that appear consistent with acoustic SOAE measures. These oscillations are readily observable, even with amplitudes as low as 10-20 pm. For reference, the diameter of a hydrogen atom is ~100 pm, and thus the low noise floor of our measurements allows us to observe the sub-atomic nature of TyM displacements down near threshold. While sound-evoked motions appear slightly compressive near SO peak frequencies, level growth was generally observed to be close to linear.
        As a heuristic benchmark for a simplified "ear", we also computationally explored several related scenarios (e.g., a coupled pair of Hopf and damped harmonic oscillators). Ultimately, our observations contrast with the predictions of the Hopf-based framework, indicating that the vertebrate ear acts in a linearistic fashion near threshold.
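        The cube-root compression predicted by such models can be illustrated with a minimal numerical sketch (a Hopf normal-form toy model, not the authors' computation): at the bifurcation point, the steady-state amplitude under resonant forcing F grows as F^(1/3), whereas a passive linear oscillator would grow in direct proportion to F.

```python
import numpy as np

def steady_amplitude(F, mu=0.0, dt=0.5, T=5e4):
    """Steady-state response of a Hopf normal form driven at resonance.

    In the co-rotating frame the dynamics reduce to dz/dt = mu*z - z**3 + F
    (real z suffices for real forcing); mu = 0 puts the system at the bifurcation.
    """
    z = 0.1
    for _ in range(int(T / dt)):
        z += dt * (mu * z - z**3 + F)
    return abs(z)

forces = np.array([1e-6, 1e-4, 1e-2])
amps = np.array([steady_amplitude(F) for F in forces])
# Log-log slope between successive force levels: ~1/3 signals compressive growth
slopes = np.diff(np.log(amps)) / np.diff(np.log(forces))
print("log-log growth slopes:", slopes)   # both close to 1/3
```

        A slope near 1 rather than 1/3, as reported for the lizard TyM away from SO peaks, is what distinguishes near-linear behaviour from the Hopf prediction.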

        Speaker: Esther Sule
    • (PPD) M2-9 | (PPD)
      • 90
        Measurement of Reactor Antineutrino Oscillation at SNO+

        SNO+ is a low background multi-purpose detector 2 km underground in Sudbury, Ontario. It is able to detect electron antineutrinos with energies down to 1.8 MeV via inverse beta decays. With the majority of the incoming electron antineutrino flux at SNO+ originating from three nuclear reactors 240, 350 and 355 km away, the detector is well situated to measure neutrino oscillation. In particular, it can measure the survival probability of these electron antineutrinos, which depends on the so-called “long baseline” oscillation parameters, namely the mixing angle $\theta_{12}$ and the mass-squared difference $\Delta m^2_{21}$. Using 1.46 ktonne-years of data from May 2022 to July 2025, $\Delta m^2_{21} = \left(7.93_{-0.24}^{+0.21}\right) \times 10^{-5}$ eV$^2$ was measured, with a precision approaching that of the previous KamLAND result, $\left(7.54_{-0.18}^{+0.19}\right) \times 10^{-5}$ eV$^2$, providing a valuable cross-check to the recent measurement by the JUNO collaboration. In addition, electron antineutrinos produced by radioactive decays inside the Earth were detected. This geoneutrino flux was simultaneously measured to be $49_{-12}^{+13}$ TNU at SNO+. The result was obtained using a novel classifier to distinguish the positron annihilation produced by inverse beta decays from their primary background: ($\alpha$, n)-induced proton recoils. This provides the only measurement of the geoneutrino flux in the Americas, and the third location worldwide, adding crucial data to a global calculation of the mantle’s radiogenic heat production.
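        For orientation, the survival probability that governs this measurement can be sketched in the two-flavour approximation (an illustrative simplification: $\theta_{13}$ and matter effects are neglected, and the mixing-angle value below is an assumed input, not a SNO+ result):

```python
import numpy as np

def survival_prob(L_km, E_MeV, dm2_eV2=7.93e-5, sin2_theta12=0.307):
    """Two-flavour electron-antineutrino survival probability.

    P = 1 - sin^2(2*theta12) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV]);
    sin2_theta12 = 0.307 is an assumed illustrative value.
    """
    sin2_2theta = 4 * sin2_theta12 * (1 - sin2_theta12)
    phase = 1.27 * dm2_eV2 * L_km / (E_MeV * 1e-3)  # E converted to GeV
    return 1 - sin2_2theta * np.sin(phase) ** 2

# Survival probability of a 4 MeV antineutrino at the three dominant baselines
for L in (240, 350, 355):
    print(f"L = {L} km: P = {survival_prob(L, 4.0):.3f}")
```

        Because the oscillation phase depends on $\Delta m^2_{21} L / E$, fitting the distortion of the reactor spectrum over these baselines is what pins down the mass-squared difference.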

        Speaker: James Page (Queen's University)
      • 91
        Spatial Reconstruction of EXO-200 Events using Deep Learning Network

        The EXO-200 experiment, a 100-kg-class liquid xenon time projection chamber, operated from 2011 to 2018, searching for neutrinoless double beta decay of 136Xe. This is a process whose observation would establish the Majorana nature of neutrinos and help constrain their absolute mass scale. Precise reconstruction of event energy and position is essential for this search. Spatial information is used to define fiducial volumes, suppress external backgrounds, and discriminate between signal-like single-site events and background multi-site interactions.
        In this work, we investigate the use of deep learning techniques to reconstruct the positions of simulated electron events from EXO-200. A deep neural network (DNN) is developed to perform reconstruction directly from electronics waveforms, i.e. raw detector quantities. While previous studies have primarily relied on gamma-ray calibration events, this analysis focuses on electron events, which more closely match the topology of double beta decay signals. The DNN performance is compared to the standard EXO-200 reconstruction methods, showing at least a factor of two improvement in spatial resolution. The results demonstrate the potential of deep learning–based reconstruction for future liquid xenon experiments such as nEXO and motivate further development for application to experimental data.

        Speaker: Tanmay . (University of Windsor)
      • 92
        Signatures of Composite Dark Matter in Bubble Chambers

        This presentation will show a new analysis of bubble chamber dark matter detectors which could be used to discover composite dark matter. A bubble chamber contains a volume of superheated fluid which nucleates a bubble when enough energy is deposited in the fluid. Traditional analysis assumes that a bubble is nucleated from a single, high-energy interaction with a dark matter particle. Composite dark matter is a class of dark matter models that contain binding forces which clump dark matter particles together, similar to nuclei in atoms. If the composite binding energy is small, the composite is "loosely bound" and the constituents can individually interact with nuclei in a detector. In this scenario, weakly interacting constituents can collectively deposit a large amount of energy in a small region of a detector from a composite passing through. Bubble chambers offer a unique sensitivity to such an effect due to the macroscopic nature of bubble nucleation, as opposed to exclusive single-event discrimination. Performing a novel analysis on bubble chamber sensitivity to loosely bound composites allows for probing of weaker dark matter interaction strengths and lighter constituent masses using existing experiments.

        Speaker: Alex Hayes (Queen's University)
      • 93
        Calibration of the SNO+ Scintillator Phase with an AmBe Neutron Source

        The SNO+ experiment is a kilotonne-scale liquid scintillator (LS) neutrino detector located 2 km underground at SNOLAB in Sudbury, Ontario. The 780 tonnes of LS are housed inside a 12 m diameter acrylic vessel (AV). Surrounding the AV is a 17 m diameter array of 9362 inward-looking photomultiplier tubes which detect scintillation light produced by particle interactions in the LS. Within its broad physics program, SNO+ detects anti-neutrinos through an inverse beta decay (IBD) reaction, producing a characteristic coincidence signal that can be easily separated from most backgrounds. This allows SNO+ to make two key measurements: the determination of a subset of neutrino mixing parameters from reactor anti-neutrino oscillations, and the flux of geo-neutrinos emitted from the decay of unstable elements in the Earth. The SNO+ collaboration has recently released improved measurements for both.

        An important component of the improved anti-neutrino analysis was the use of a deployed $^{241}$Am-$^{9}$Be neutron calibration source, which produces a coincidence signal similar to that of IBD reactions. Validation of the detector response to these events enabled, for the first time, the use of a novel timing-based event classifier to distinguish IBD events from a class of background delayed-coincidence events produced by $(\alpha,n)$ interactions. This talk will summarize the SNO+ AmBe calibration campaign and analysis of the calibration data.

        Speaker: Anthony Allega (Queen's University)
      • 94
        Performance results of germanium high-voltage detectors tested in the Cryogenic Underground TEst facility

        The Super Cryogenic Dark Matter Search (SuperCDMS) experiment aims to search for low-mass dark matter using the new generation high-voltage (HV) detector design with germanium and silicon targets. A subset of these HV detectors (four Ge and two Si) were tested in the Cryogenic Underground TEst (CUTE) facility at SNOLAB, which provided the first opportunity for an extended operation of these detectors and a comprehensive study of their performance in a low background environment similar to SuperCDMS SNOLAB. Here, I will report on the analysis methods developed and the results of low- and high-energy calibrations (sub-keV to hundreds of keV) for the germanium detectors. We also assess the energy resolution and the detector performance under high-voltage bias (up to 90 V).

        Speaker: Ruchi Soni (Queen's University)
    • 15:45
      Health Break | Pause santé
    • (DAPI) M3-1 | (DPAI)
      • 95
        Microscopic Dust, Macroscopic Downtime: The impacts of micron-sized particulates in superconducting particle accelerators

        Particle accelerators are an invention of the early 20th century that have revolutionized our understanding of the universe and pushed the boundaries of our technological capabilities. These machines not only help scientists probe fundamental physics questions, but are also pillars in many industrial and medical applications. Acceleration of charged particles can be achieved by using superconducting radiofrequency (SRF) devices known as cavities, which apply variable electromagnetic fields to particles under cryogenic conditions. A key limitation to the performance of SRF-based accelerators is contamination; external particulates (aka dust) present on the cavity surface trigger field emission, a phenomenon where electrons tunnel through the cavity surface due to strong electric fields. These rogue electrons limit the beam energy delivered to users and can even saturate machine protection systems. Field emission is actively observed at the TRIUMF electron linear accelerator (e-Linac); this accelerator shows a progressive onset of field emission in its SRF cavities throughout operation, despite cavities undergoing stringent cleaning procedures prior to installation. The reliable operation of particle accelerators is critical for the scientific and industrial applications of accelerator facilities worldwide; therefore, understanding this phenomenon and its root causes is imperative.
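        As standard background (not stated in the abstract), field emission is conventionally described by the Fowler–Nordheim relation, in which the emitted current density $J$ depends exponentially on the local surface field $E$ and the work function $\phi$:

```latex
J \propto \frac{E^{2}}{\phi}\,\exp\!\left(-\frac{B\,\phi^{3/2}}{E}\right), \qquad B \approx 6.83 \times 10^{9}\ \mathrm{V\,eV^{-3/2}\,m^{-1}}
```

        The steep exponential dependence is why particulates that locally enhance $E$ can trigger emission at accelerating gradients well below the nominal field-emission threshold of a clean surface.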

        My PhD work centers on the idea that particulates are actively generated by various accelerator components during operation, and migrate into SRF cavities after installation, leading to the increased onset of field emission. The dynamics of micron-scale particulates in vacuum are influenced by their electrostatic charge, and the radiation environment of a particle accelerator provides an ideal opportunity for them to gain such charge. However, fundamental parameters such as the composition and charge-to-mass ratio of these grains remain largely unknown and are unique to each accelerator environment. I will present a series of experiments performed to study the charging and lofting of micron-sized particulates in vacuum using an in-vacuum particle counter. The results of these experiments will shed light on the mechanisms that drive particulate migration and inform mitigation techniques to better maintain the performance of high-energy SRF-based particle accelerators.

        Speaker: Aveen Mahon (University of Victoria | TRIUMF)
      • 96
        X-ray Ptychography at the Canadian Light Source

        Ptychography is a lensless X-ray diffraction microscopy technique in which the sample is scanned across pre-defined positions and a far-field diffraction pattern is collected at each one. The diffraction patterns, along with the scan-position information, are then processed using iterative phase retrieval algorithms, yielding a high-resolution complex transmission function of the sample. Unlike transmission X-ray microscopy (TXM), Fourier transform holography (FTH), or scanning transmission X-ray microscopy (STXM), the resolution does not depend on the focusing optics or the reference aperture, but on the detectable signal contained in the measurable inverse space; while other parameters such as beam stability and stage accuracy play an important role, this remains the fundamental limit on obtainable resolution. We have developed extensive ptychography-derived 2D and 3D imaging capabilities at the CLS. We can conduct ptychography and ptycho-tomography at the SM beamline in the soft X-ray regime, and we have demonstrated the first hard X-ray ptychographic imaging in Canada by establishing this experiment at the BXDS-IVU beamline; we are currently pushing this capability into the third dimension. I will report on the various details leading up to this exciting experimental capability at the Canadian Light Source.
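        The iterative phase retrieval at the heart of ptychography can be illustrated with a deliberately minimal 1-D toy (a Gerchberg–Saxton-style error-reduction loop with a single known support, not the multi-position ptychographic reconstruction itself; all sizes and data are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
support = np.zeros(n, dtype=bool)
support[96:160] = True
obj = np.zeros(n)
obj[support] = rng.random(support.sum())      # unknown non-negative object
meas_mag = np.abs(np.fft.fft(obj))            # "measured" diffraction magnitudes

est = rng.random(n)                           # random starting guess
for _ in range(300):
    F = np.fft.fft(est)
    F = meas_mag * np.exp(1j * np.angle(F))   # enforce measured magnitudes
    est = np.fft.ifft(F).real
    est[~support] = 0.0                       # enforce the known support
    est[est < 0] = 0.0                        # enforce non-negativity

err = np.linalg.norm(np.abs(np.fft.fft(est)) - meas_mag) / np.linalg.norm(meas_mag)
print(f"relative Fourier-magnitude residual after 300 iterations: {err:.3f}")
```

        Real ptychographic algorithms (e.g., ePIE) exploit overlapping scan positions instead of a single support constraint, which is what makes the reconstruction robust and also recovers the illuminating probe.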

        Speaker: Venkata Sree Charan Kuppili (Canadian Light Source)
      • 97
        Cold Noise Mechanism in ATLAS ITk Strip Modules

        The ATLAS collaboration is currently preparing to replace the present Inner Detector with the upgraded Inner Tracker (ITk) for Run-IV of the Large Hadron Collider (LHC). In 2022, modules for the central strip tracker were found to have clusters of electrical noise outside specifications when tested at normal operating temperatures (-35°C), dubbed “Cold Noise”. The cause of Cold Noise lies in the interactions of vibrating electronic components with material interfaces and glue chemistry, which requires detailed understanding of all the involved systems that make up an ITk module. Thus, the study of Cold Noise involves topics spanning electronics, mechanics and solid state physics which are applied in parallel. This work probes complex interfacial interactions that can generate electrical signals, effects that are readily picked up by the high sensitivity of modern sensors, acting as a new and unexpected source of noise. This talk presents an overview of our current understanding of the Cold Noise mechanism with recent results focused on surface charge generation, readout, glue properties and structural analysis.

        Speaker: Richard Oyeremi Salami (Simon Fraser University (CA))
      • 98
        New or Improved? Critical build decisions in the ATLAS ITk Strip End-Cap

        The High-Luminosity Large Hadron Collider is set to come online in 2030, producing more than three times the data of the current LHC. As such, the ATLAS detector will need to replace its Inner Detector with the Inner Tracker, an all-silicon system. Canada is responsible for building and testing 22% of the strip end-cap subdetector, due to be installed at CERN in 2028.

        In 2024, as the production phase was scheduled to begin, the ATLAS Inner Tracker discovered a major reliability problem, where a discrepancy in material coefficients led to cracked sensors when cooled down to operating temperatures. The following analysis, based on experimental and simulated data, yielded two possible solutions: a major design change providing a large safety margin and a minor design change providing a smaller safety margin. In early 2025, due to time constraints and low availability of prototypes, decisions were made based on a limited data set. Now, a year later, multiple parts have been assembled and more information is available.

        This talk will outline the two approaches under consideration, the decision process used to choose a final design and how the situation has evolved since then.

        Speaker: Emily Filmer (TRIUMF (CA))
      • 99
        Silicon Pixel Detectors for the MOLLER Experiment

        The MOLLER Experiment at Jefferson Lab, Virginia, will utilize parity-violating electron scattering to measure the asymmetry in Møller scattering, and extract the weak mixing angle with unprecedented precision. The predicted asymmetry is ~33 parts per billion, with a target uncertainty of 0.8 ppb, corresponding to a 2.4% determination of the electron’s weak charge and a projected precision of δ(sin^2 (θ_W) ) = ±0.00028. This measurement will provide a stringent test of the Standard Model while offering sensitivity to new neutral current interactions at the multi-TeV scale, making MOLLER a powerful probe of physics beyond the Standard Model.

        To support these precision goals, pixel detectors based on High Voltage Monolithic Active Pixel Sensors (HV-MAPS) are being integrated into two key MOLLER subsystems. Each silicon sensor has an active area of ~20 x 20 mm^2. With pixel sizes of ~80 x 80 microns and thicknesses as low as 50 microns, they provide fine spatial resolution with minimal material for unwanted scattering. Operation at high bias voltages enables fast charge collection by drift, allowing the sensors to cope with high-rate environments like MOLLER.

        A family of HV-MAPS (known as MuPix) will be implemented in two detector subsystems for MOLLER. In the Compton Polarimeter, planes of pixel detectors will provide tracking and profile measurements to support precise beam polarization monitoring. For the Main Integrating Detector, the planes are installed behind a subset of the overall Cherenkov detector modules, and operate as profile detectors, measuring the spatial distribution of scattered electrons across the detector array. These measurements provide critical input for alignment, acceptance studies, and control for systematic effects in the asymmetry extraction.

        This work demonstrates the implementation of these pixel sensors in both subsystems, including the design and analysis of mounting and cooling structures under projected operating conditions. The talk will also cover the development of the readout architecture, which utilizes the low-power GigaBit Transceiver (lpGBT) and VTRx+ optical transceiver at its core for high-speed communication with these chip arrays. A specialized pick-and-place system is being developed for high precision sensor assembly during detector module construction.

        Speaker: Dr Mohammad Laheji (University of Manitoba)
    • (DPMB) M3-10
      • 100
        Optical Biomodulation Without Optogenetics

        Cellular behaviour modulation has traditionally been studied through genetic and epigenetic manipulation. However, non-genetic approaches to controlling behaviour remain largely underexplored. We demonstrate here, for the first time, the use of membrane-incorporated molecular photoswitches, in the form of azobenzene-cholesterol derivatives, to optically control cellular behavioural processes in the immotile algal dinoflagellate Pyrocystis fusiformis (P. fusiformis). P. fusiformis produces bioluminescence when activated by shear stress imposed on the outer plasma membrane. This unique property, combined with the ease of culturing, makes P. fusiformis the ideal model cellular organism for our study. Previously, azobenzene derivatives (photoswitches), which change molecular conformation (isomerization) upon irradiation at specific absorption wavelengths, have been shown to localize to the plasma membranes of human red blood cells and alter their membrane morphology. Analogously, we aim to use optical stimulation to induce isomerization of membrane-integrated photoswitches and thereby generate optically derived mechanical shear stress on the outer plasma membrane, triggering bioluminescence in P. fusiformis. This approach would constitute a novel means of altering cellular behaviour without genetic or epigenetic manipulation. We will show the first stages of a proof of concept for this optical biomodulation method, including localization of the photoswitches to the membranes of P. fusiformis cells and light-directed motion of an immotile P. fusiformis cell upon irradiation at the isomerization wavelength of the localized photoswitch (λ = 365 nm). This phenomenon constitutes a new method for optical steering of biological cells and has broad applicability in pharmaceutics and biomodulation.

        Speaker: Seyed Masoumi Lari (York University)
      • 101
        Towards Non-invasive Assessment of Tissue Stiffness Using Diffuse Correlation and Time-resolved Near-infrared Spectroscopy

        Introduction: Brain tissue stiffness has emerged as a promising biomarker for early diagnosis of neurodegenerative changes. Diffuse Correlation Spectroscopy (DCS) has the potential to estimate tissue stiffness non-invasively, as the decorrelation time of scattered light speckles (Tau 2) correlates with Young’s Modulus. While decorrelation time is dominated by dynamic scatterers, Tau 2 in tissue still decorrelates from photon pathlength divergence induced by scattering. However, this relationship is likely influenced by more than one physical parameter. More specifically, Tau 2 is also dependent on tissue scattering coefficient (μ_s^'), which is influenced by density and size of microstructures and can be measured with time-resolved near-infrared spectroscopy (trNIRS). We hypothesize that changes in μ_s^' also alter Tau 2 independent of variations in tissue stiffness. The objective of this study is to investigate the relationship between Tau 2 and μ_s^', using Intralipid (IL) phantoms to mimic varying degrees of tissue scattering while stiffness remains constant.

        Methods: The phantoms were prepared by mixing increasing concentrations of IL in distilled water (0.5% to 1.5% in steps of 0.1%). For DCS measurements, a 785-nm, long-coherence-length laser was used for emission, and a single-photon counting module was used for detection. The trNIRS measurements were acquired at 760 nm using a commercial system (PIONIRS). A constrained optimization routine was used to estimate Tau 2 from the intensity autocorrelation function measured with DCS, and μ_s^' was estimated from the distribution of times-of-flight measured with trNIRS using an analytical model of light propagation in turbid media.
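        The Tau 2 extraction step can be sketched with a simplified toy model (a single-exponential speckle decorrelation with additive noise; the actual analysis fits a constrained model of the full intensity autocorrelation function):

```python
import numpy as np

rng = np.random.default_rng(1)
beta, tau_c = 0.5, 1e-5                  # coherence factor, decorrelation time (s)
tau = np.logspace(-7, -4, 60)            # lag times (s)
# Toy intensity autocorrelation: g2(tau) = 1 + beta * exp(-2 tau / tau_c) + noise
g2 = 1 + beta * np.exp(-2 * tau / tau_c) + rng.normal(scale=1e-4, size=tau.size)

# Linearize: ln(g2 - 1) = ln(beta) - (2 / tau_c) * tau, then least-squares fit
mask = g2 > 1 + 1e-3                     # keep lags where the decay is above the noise
slope, intercept = np.polyfit(tau[mask], np.log(g2[mask] - 1), 1)
tau_fit = -2 / slope
print(f"recovered decorrelation time: {tau_fit:.2e} s (true value {tau_c:.0e} s)")
```

        In a real measurement the decorrelation is not single-exponential, which is why a constrained nonlinear optimization over the full autocorrelation curve is used instead of this linearized fit.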

        Results: As IL concentration increased, Tau 2 and μ_s^' exhibited a negative correlation, with μ_s^' increasing as Tau 2 decreased. An R2 of 0.733 demonstrated that changes in Tau 2 are strongly correlated with μ_s^'.

        Conclusion: The preliminary findings of this study show a quantitative relationship between Tau 2 from DCS and μ_s^' from trNIRS. Optical assessment of tissue stiffness therefore requires measurements with both DCS and trNIRS, which together provide a potential non-invasive tool for early detection of neurodegenerative diseases. Immediate future work will investigate the relationship between Tau 2, μ_s^', and Young’s Modulus in solid phantoms, mimicking variations in both scattering and stiffness.

        Speaker: Emma Zhang
      • 102
        Characterization of Radiochromic PCDA Formulations for Real-time Photon and Proton Dosimetry

        Pentacosa-10,12-diynoic acid (PCDA) and its Li salts are used in clinical radiotherapy across a broad range of modalities. The underlying chemical and structural characteristics of these materials, however, remain poorly understood within the medical physics community. In this work we systematically evaluate the real-time dosimetric performance of three PCDA-based radiochromic formulations (two commercial analogues and one novel formulation we developed) under high-energy photon (6 and 10 MV) and proton (74 MeV) irradiation. Formulations were coated on polyethylene substrates and irradiated using custom fibre-optic setups enabling real-time transmission measurements at 1-2.5 cm depth. Two distinct absorbance peak positions, centered at ~635 nm and ~674 nm, were observed as a function of the lithium molar ratio introduced during formulation. The novel Li salt of PCDA (674LiPCDA) exhibited a non-linear dose response resembling that of commercial EBT-3, while displaying a plate-like morphology similar to PCDA. Lithium incorporation produced an overall enhancement in apparent dose sensitivity relative to PCDA; however, 674LiPCDA, which has a higher lithium molar ratio than its commercial analogue 635LiPCDA, had a 68 ± 2 % lower apparent sensitivity at 25 Gy. All formulations exhibited reduced apparent sensitivity under proton irradiation relative to photons, consistent with LET-dependent quenching effects observed in commercial radiochromic films. Elemental analysis revealed consistent elemental composition for 635LiPCDA and 674LiPCDA, with nearly identical lithium content (1.66% and 1.69%, respectively). ATR-FTIR spectroscopy revealed the absence of O-H stretching in 674LiPCDA, indicating that, unlike 635LiPCDA, the material is anhydrous and that hydration plays a significant role in dose response. These results demonstrate that radiochromic dose response is governed by a complex interplay of small-molecule incorporation and chemical composition.
This study establishes a framework for the controlled generation of radiochromic dosimeters, paving the way for broad-use, real-time in vivo dosimeter technologies to improve patient outcomes in radiotherapy.

        Speaker: Rohith Kaiyum
      • 103
        Development and Validation of a Wearable, Low-Cost Near-Infrared Spectroscopy System for Cerebral Hemodynamic Monitoring

        Access to brain monitoring technologies remains a critical need in low-income countries, where neonatal brain injury contributes substantially to permanent neurological impairment and mortality. To address this challenge, we introduce a low-cost, wearable near-infrared spectroscopy (NIRS) device developed by repurposing a commercially available optical biosensing evaluation module (MAXM86146EVSYS, Maxim Integrated). The device is battery-powered, wireless, and compact, and provides enhanced depth sensitivity for measuring cerebral oxygenation in both neonates and adults. This approach eliminates the need for custom hardware, allowing end users with minimal engineering expertise to reproduce the system while preserving robust signal quality.
        System performance was characterized through long-term stability testing and layered blood-phantom experiments. The capability of the device to monitor cerebral hemodynamics in adults was further evaluated in humans using carotid compression and hypercapnia challenges. Performance testing demonstrated stable operation with less than 1% signal drift over nearly four hours of continuous use. Two-layer blood phantom experiments showed enhanced depth sensitivity and the ability to detect deep-layer oxygenation changes beneath a 15 mm superficial layer, representative of adult extracerebral tissues. In vivo experiments produced expected cerebral hemodynamic and oxygenation responses, confirming sensitivity to physiologically relevant changes in cerebral tissue oxygenation. In addition, robustness across diverse skin pigmentation levels was evaluated in adults, demonstrating preserved physiological pulsatility despite reduced detected intensity at higher pigmentation levels.
        These results demonstrate the feasibility of adapting a commercially available optical biosensing module into a reliable NIRS-based cerebral oximeter, offering a practical solution for neonatal brain monitoring in low-resource environments. The device can be reproduced with minimal technical skills by replacing the factory-installed LEDs and mounting the module within a 3D-printed probe.
        This study was supported by Western University under the Frugal Biomedical Innovations program, NSERC Discovery Grants (RGPIN-2023-05561), and (RGPIN-2020-06856).

        Speaker: Saeed Samaei (University of Western Ontario)
      • 104
        The effect of skin tone on the sensitivity of infrared optical technologies

        Melanin pigment determines skin tone, with darker skin tones containing higher levels of melanin. To examine the sensitivity of infrared optical technologies as a function of skin tone, five commercial pulse and cerebral oximeters were investigated on participants with varying skin tone (Brands: Masimo MightySat™, Blue Echo Care™, ToronTek-G64™, Apple Watch Series 10™, and NIRSport2™). The participants’ skin tones were determined by objectively measuring melanin content using a reflectance spectrophotometer, with skin tones ranging from Type I (Very Light) to Type VI (Dark) based on the Individual Typology Angle. Blood oxygen saturation (SpO2, %) and heart rate (HR, bpm) were measured before and after walking on a treadmill to induce hypoxemia (SpO2 < 95 %). The gold-standard pulse oximeter commonly used in clinical settings (Masimo MightySat™, wavelengths 660 nm and 940 nm, variable power) recorded a 5 % reduction in SpO2 and an increase in HR of 70 bpm across participants of all skin tones. Sensitivity measurements of the Apple Watch Series 10™ showed no change in SpO2, compared to the gold standard, across all participants. Unlike the gold standard, this device uses a 523 nm wavelength, which is absorbed more strongly by melanin and water than 660 nm light, leading to poor device sensitivity. For participants with the darkest (Type VI) skin tone, the Blue Echo Care™ and ToronTek-G64™ pulse oximeters were not sensitive, showing no change in SpO2. Despite both pulse oximeters using wavelengths similar to the Masimo MightySat™, the low static power of the devices’ LEDs could explain their poor sensitivity for the darkest skin type. The cerebral oximeter generally measured a 1–2 % reduction in brain oxygen saturation across participants with skin tone Types I–V, and a 0.5 % reduction for participants with the darkest Type VI skin tone.
        These findings provide evidence of potential skin tone bias in the sensitivity of many infrared devices, of which industry manufacturers need to be aware. Future research will attempt to optimize near-infrared device engineering to calibrate and correct for variations in individuals’ skin tone, toward personalized medicine and consumer products.
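
        The Individual Typology Angle is conventionally computed from CIELAB coordinates; a hypothetical helper using the standard formula ITA = arctan((L* − 50)/b*) and the usual category thresholds (both are assumptions about the study's exact implementation):

```python
import math

def ita_degrees(L_star, b_star):
    # Standard ITA formula from CIELAB lightness L* and yellow-blue b*.
    return math.degrees(math.atan2(L_star - 50.0, b_star))

def skin_type(ita):
    # Conventional ITA bands, Type I (very light) through Type VI (dark).
    bands = [(55, "I (Very Light)"), (41, "II (Light)"),
             (28, "III (Intermediate)"), (10, "IV (Tan)"),
             (-30, "V (Brown)")]
    for threshold, label in bands:
        if ita > threshold:
            return label
    return "VI (Dark)"

print(skin_type(ita_degrees(70.0, 15.0)))  # ITA ~53 deg -> "II (Light)"
```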

        Speaker: Dr Kayrel Edwards (York University, Department of Physics and Astronomy)
    • (PPD) M3-11 | (PPD)
      • 105
        Development of an ATLAS Run 3 Search for Magnetic Monopoles in p-p Collisions

        Magnetic monopoles remain highly motivated yet experimentally elusive exotic particles. Point-like Dirac monopoles, along with other High-Electric-Charge Objects, could be produced in proton-proton collisions at the Large Hadron Collider and would generate a characteristic highly ionizing signature within the ATLAS detector. Building on a previous Run 2 analysis, I will present plans for a new ATLAS Run 3 search using a machine learning technique incorporating several additional detector-level quantities. Preliminary results suggest that this approach achieves more robust background separation and estimation, maximizing the potential for the discovery of magnetic monopoles at the LHC.

        Speaker: Justin Kerr (York University (CA))
      • 106
        Oscillating Pseudo-Dirac Gauginos: A New Twist on Leptogenesis and Neutrino Masses

        In a $U(1)_{R-L}$-symmetric supersymmetric model, pseudo-Dirac bino and wino can act like right-handed neutrinos, generating the light neutrino masses through a hybrid Type I + III inverse seesaw mechanism. We investigate such a model to accommodate the baryon asymmetry of the universe together with neutrino masses. A pseudo-Dirac gaugino undergoes particle-antiparticle oscillations. Possible $CP$ violation in bino decays, induced by mixing with the neutrinos, can be enhanced in bino--antibino oscillations. Focusing on a long-lived bino, we show that its oscillations and decays can generate the observed baryon asymmetry while the wino is responsible for generating the neutrino masses. This mechanism requires a decoupled mass spectrum with a bino of mass $M_{\widetilde{B}}\sim O({\rm TeV})$ and sfermions with mass $M_{\rm sf}\gtrsim 25$ TeV. Furthermore, for the bino to decay out-of-equilibrium before the electroweak sphalerons turn off, the messenger scale needs to be $\Lambda_M \sim O(10^7~ {\rm TeV})$. We discuss displaced-vertex signals at the LHC arising from such a high messenger scale.

        Speaker: Cem Murat Ayber (Carleton University)
      • 107
        Forward Jets with the ATLAS Detector: A Mismodeling Mystery

        The ATLAS detector, located at the CERN laboratory, measures the products of high-energy proton-proton and heavy-ion collisions produced by the Large Hadron Collider (LHC). Given the high collision energy of 13.6 TeV produced by the LHC, the ATLAS detector is a prime location to study fundamental interactions of the universe. To help search for these phenomena, the ATLAS Collaboration uses a series of simulations of events identified with the detector. This simulated data can be used to predict the outcome of real data measurements and is therefore useful in the design of physics analyses. This methodology, however, can fall short if the simulation greatly mismodels the real data. For the simulation of the 2022-2026 run of the LHC, a major discrepancy was identified with the prediction of jets in the forward region of the detector. It was found that the models predicted only about half as many jets as were observed in the data. This differs greatly from the prediction for the 2015-2018 run, which showed reasonable agreement with data. This under-prediction is a notable issue, as forward jets can be an important signature of certain events of interest. For example, the electroweak production of Z bosons at the LHC is a prominent signature that includes the production of two forward jets (EW Zjj). A lack of precision in the predicted observation of this identifying feature may reduce the precision of measurements of this production. This talk will examine the studies that have been performed to identify the source of the mismodeling and how it impacts analyses sensitive to forward jets, such as the EW Zjj analysis, using data from both the 2015-2018 and 2022-2026 runs of the LHC.

        Speaker: Owen Darragh (Carleton University (CA))
      • 108
        Measurement of the Pion QCD Form Factor in e+e−→π+π− at sqrt(s)=10.58 GeV with the Belle II Detector

        The pion QCD form factor encodes the effects of the strong interaction in processes involving pions and provides a fundamental probe of non-perturbative and asymptotic Quantum Chromodynamics (QCD).

        Because first-principles calculations of non-trivial QCD interactions are challenging, empirically measured form factors play a crucial role in data-driven approaches. In particular, they can be used to improve hadronic modeling in event generators such as Pythia, leading to more accurate simulations of hadronic final states. Measurements of the pion form factor at high energies test the asymptotic behavior predicted by different QCD models, providing valuable benchmarks for theoretical calculations and computational techniques.

        The $e^+e^- \rightarrow \pi^+ \pi^-$ process is studied without initial-state radiation using data collected at a center-of-mass energy of 10.58 GeV by the SuperKEKB collider, the world’s highest-luminosity electron--positron collider. The final-state pions are reconstructed with the Belle II detector, a multipurpose detector located at the interaction point of the SuperKEKB beams.

        Signal events are selected using multivariate analysis (MVA) techniques and optimized using Monte Carlo simulations of the signal and dominant background processes. Systematic effects associated with the selection are evaluated using a control sample of $\tau^\pm \rightarrow (\rho^\pm \rightarrow \pi^\pm \pi^0)\nu_\tau$ decays, tagged with $\tau^\pm \rightarrow \ell^\pm \nu_\ell \nu_\tau$ ($\ell = e, \mu$), which provides a high-purity source of pions in data for validation and calibration.

        Speaker: Alexandre Beaubien
      • 109
        Searching for soft unclustered energy patterns with the ATLAS detector

        Soft Unclustered Energy Patterns (SUEPs) are an unconventional signature that can arise in hidden-sector models with approximately conformal dynamics. Unlike traditional new physics signals, SUEP events are dominated by high particle multiplicities, low individual transverse momenta, and nearly isotropic energy flow, making them challenging to identify using standard reconstruction and analysis techniques at the Large Hadron Collider (LHC).

        Previous experimental studies have begun to explore the phenomenology of SUEP-like signatures. In particular, the CMS Collaboration has performed a search targeting SUEP topologies using hadronic final states, demonstrating the feasibility of dedicated reconstruction strategies for diffuse, soft events. Complementary efforts within the ATLAS Collaboration also investigate SUEP scenarios with a focus on muonic final states, highlighting alternative experimental handles on hidden-sector dynamics.

        This talk presents an ongoing analysis to characterize hadronic final state SUEP signatures in proton–proton collisions at √s = 13 TeV using the ATLAS detector. A description of ongoing efforts will be presented, which aim to refine experimental tools for identifying SUEP signatures and to broaden the sensitivity of LHC searches to non-traditional manifestations of physics beyond the Standard Model.

        Speaker: William Rettie (Carleton University (CA))
    • (DTP) M3-2 | (DPT)
      • 110
        Compact Objects in regularized 4D Einstein-Gauss-Bonnet Gravity

        Since the derivation of a well-defined $D\rightarrow 4$ limit for 4 dimensional Einstein Gauss-Bonnet (4DEGB) gravity coupled to a scalar field, there has been considerable interest in testing it as an alternative to Einstein's general theory of relativity. Using the Tolman-Oppenheimer-Volkoff (TOV) equations modified for charge and 4DEGB gravity, we model the stellar structure of neutron stars, strongly interacting quark stars, and charged, non-interacting quark stars. We find that increasing the Gauss-Bonnet coupling constant $\alpha$, the strong interaction parameter $\lambda$, or the charge $Q$ all tend to increase the mass-radius profiles of quark stars described by this theory, allowing a given central pressure to support larger quark stars in general. These solution sets are consistent with recent astrophysical data that have been difficult to describe with standard general relativity (GR) and typical neutron star equations of state. We also discuss the lack of a mass gap in 4DEGB gravity and derive a generalization of the Buchdahl bound for charged stars in the theory. In all cases, we find that quark stars can exist below the general relativistic Buchdahl bound (BB) and Schwarzschild radius $R=2M$, due to the lack of a mass gap between black holes and compact stars in the 4DEGB theory. Even for $\alpha$ well within current observational constraints, we find that compact star solutions in this theory can also describe Extreme Compact Charged Objects (ECCOs), objects whose radii are smaller than what is allowed by general relativity.
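
        For reference, the uncharged general-relativistic TOV equation that the charged, 4DEGB-modified system generalizes reads (in units $G=c=1$; the modified equations add $\alpha$- and $Q$-dependent terms not reproduced here):

```latex
\frac{dP}{dr} = -\frac{(\rho + P)\left(m + 4\pi r^{3} P\right)}{r\,(r - 2m)}
```

        Here $P(r)$ is the pressure, $\rho(r)$ the energy density, and $m(r)$ the mass enclosed within radius $r$.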

        Speaker: Michael Gammon (University of Waterloo)
      • 111
        Accelerating Black Holes in 3D Einstein-Gauss-Bonnet Gravity

        We present a class of exact analytical solutions of the Einstein-Gauss-Bonnet gravity in 2+1 dimensions. This solution is obtained by imposing restrictive conditions on the field equations. In particular, the modified gravity metric in Cartesian coordinates equals the C-metric multiplied by a factor which, along with the massless scalar field, depends purely on a single variable. In the zero coupling limit, this general class of solutions not only remains well defined and recovers the C-metric, but also encompasses two classes of previously unknown representations of the AdS spacetime. Furthermore, we establish the existence of a domain wall and inspect the energy conditions along it.

        Speaker: Cendikiawan Suryaatmadja (University of Waterloo)
      • 112
        Thermal Origin of the Attractor-to-General-Relativity in Scalar-Tensor Gravity

        Whether scalar-tensor gravity approaches general relativity or departs from it (e.g., in the early universe) can be described using an analogy with heat dissipation in a viscous fluid. We apply this thermal analogy to cosmology. As a result, we understand why gravity can deviate from general relativity in the early universe and approach it later in cosmic history, as originally proposed by Damour and Nordtvedt in the 1990s.
        [Based on V.F. & A. Giusti 2025, Phys. Rev. Lett. 134, 211406]

        Speaker: Valerio Faraoni
      • 113
        Measuring cosmic bulk flow with kinetic Sunyaev-Zel'dovich (kSZ) velocity reconstruction

        The average large-scale velocity of matter in the universe, known as bulk flow, is a fundamental test of the Cosmological Principle. Traditionally, this has been measured only out to $R\lesssim 100$ megaparsecs (Mpc). We present an application of kinetic Sunyaev-Zel'dovich (kSZ) velocity reconstruction to constrain bulk flow on cosmological scales more than an order of magnitude larger, extending out to $R\sim2000\ {\rm Mpc}$. This technique isolates the Doppler shifting of Cosmic Microwave Background (CMB) photons scattered by electrons in galaxies to reconstruct the underlying velocity field.

        We use galaxy data from two surveys (unWISE and WISExSCOS) combined with CMB maps (from Planck) to reconstruct large-scale velocities in six redshift bins spanning $0.1\lesssim z \lesssim 1.5$. We place the tightest upper limits to date on bulk velocity at $500 \lesssim R\,[{\rm Mpc}]\lesssim 2000$, finding results fully consistent with the standard cosmological model, $\Lambda$CDM.

        Furthermore, our constraints are relevant for the "CMB dipole anomaly", a longstanding tension where measurements of galaxy number counts imply a bulk flow significantly larger than the standard theoretical expectation (e.g., from CatWISE). Our constraints are in $\sim 2\sigma$ tension with the leading number-count dipole measurement from CatWISE, challenging their interpretation of the dipole anomaly as an excess coherent bulk flow, and reinforcing the standard cosmological model.

        Speaker: Dr Suroor Seher Gandhi (Perimeter Institute for Theoretical Physics)
      • 114
        Additional Novel Cosmological Models

        At the 2025 CAP meeting, a vacuum (Riemann-flat) exact cosmological solution with the novel feature of being essentially independent of the precise functional form of the cosmological scale factor was presented and discussed. Here, the cases in which the cosmological constant is non-zero and matter in the form of pressureless dust acts as a gravitational source are considered.

        Speaker: Patrick Kelly (University of Mary)
    • (DAMOPC) M3-3 | (DPAMPC)
      • 115
        Reconstructing Femtosecond Laser Pulses with Femtojoule Pulse Energies

        In the past few decades ultrafast laser physics has blossomed into a mature field, as seen by the Nobel Prizes awarded for chirped pulse amplification (2018) and high-harmonic generation (2023). With the realization of light pulses on the order of a few optical cycles comes an increasing demand for suitable pulse measurement techniques. Since the 1990s, Frequency-Resolved Optical Gating (FROG) has gone through numerous iterations and has become the standard method for ultrashort pulse reconstruction.
        To reconstruct pulses with femtojoule-level energies and/or bandwidths approaching or exceeding an octave, cross-correlation FROG (XFROG) variations are particularly useful. A common XFROG approach, OPA-XFROG (Optical Parametric Amplifier XFROG), records the spectrally resolved output of an optical parametric amplifier versus gate delay and uses the XFROG algorithm to retrieve the intensity and phase of the unknown pulse.
        We introduce KICKING FROG (Kerr-Instability amplification Cross-correlation Induced Nonlinear Generation FROG), which replaces OPA with Kerr-instability amplification to extend XFROG characterization to pulses with broader bandwidths. Compared to OPA-XFROG, KICKING FROG offers greater bandwidth, tunability, and reduced cost, making it well-suited for broadband visible-pulse characterization.
        Kerr-instability amplification (KIA) is a nonlinear optical process extending four-wave mixing to extreme intensities, where two pump photons are destroyed to amplify a seed and create an idler (2ω_p=ω_s+ω_i). Conservation of energy and momentum dictates non-collinear, intensity-dependent phase matching. KIA can exceed the bandwidth and tunability of OPA and NOPA systems while enabling the use of any χ^(3) (Kerr) medium rather than the rarer χ^(2) materials required for OPA. In addition, Kerr polarization rotation enables background-free signal generation, further improving sensitivity for weak-pulse reconstruction down to the femtojoule regime. We will discuss applications of broadband background-free amplification for spectroscopy.
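
        Since energy conservation fixes the idler via 2ω_p = ω_s + ω_i, the idler wavelength follows directly from the pump and seed; a quick sketch (the 800 nm pump and 600 nm seed are illustrative values, not parameters of this work):

```python
def idler_wavelength_nm(pump_nm, seed_nm):
    # 2*w_p = w_s + w_i with frequency proportional to 1/wavelength,
    # so 1/lam_i = 2/lam_p - 1/lam_s.
    inv_idler = 2.0 / pump_nm - 1.0 / seed_nm
    return 1.0 / inv_idler

# An 800 nm pump amplifying a 600 nm seed produces a 1200 nm idler.
print(idler_wavelength_nm(800.0, 600.0))  # -> 1200.0 (up to float rounding)
```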

        Fig. 1: Spectrogram generated from KICKING FROG. The seed is generated by supercontinuum generation in sapphire. We scan the pump-seed delay and record the spectrogram of the amplified pulse. We reconstruct the seed and pump pulses using our XFROG algorithm.

        Speaker: Nathan Drouillard
      • 116
        Ultrahigh-bandwidth quantum random-number generation using bright squeezed vacuum

        Synopsis: We utilize TIPTOE to directly sample the field of femtosecond bright squeezed vacuum. By resolving the bi-phase nature of the carrier wave, we generate random bit sequences. We perform a full temporal-modal analysis which will allow for random bit generation at PHz rates.

        In the last decade, all-optical techniques have been introduced as prospects for quantum random number generators (QRNGs) [1,2]. Such techniques rely on the bi-phase output of a degenerate optical parametric oscillator. Unfortunately, the generation rate of such schemes is limited by the cavity decay time [1]. Here, we use a double-pass parametric down-conversion (PDC) scheme and field sampling of bright squeezed vacuum (BSV) to bypass the need for a cavity.

        We utilize tunneling ionization with a perturbation for the time-domain observation of an electric field (TIPTOE) [3] to resolve the field of a femtosecond squeezed vacuum beam. We measure changes in two-photon excitation rate in a Si-based CMOS camera induced by the coherent combination of a pump and a BSV field. We perform a full temporal-modal analysis using spectral embedding and find both one-mode and two-mode emission. We extract the carrier-envelope phase (CEP) of BSV shots with a single temporal mode, and find that it exhibits the expected bimodal phase distribution, whereby the phase is locked to that of the pump, apart from a sign. We binarize this phase distribution, assigning bit values 0 or 1 (see Fig. 1b).

        Finally, we run a variety of NIST-compliant tests to certify our generated bit sequences. By assigning one bit per temporal mode, this scheme will allow for PHz-rate QRNG.
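
        A minimal sketch of the binarization step described above (the bimodal phase samples are simulated here, not measured BSV data):

```python
import math
import random

# Simulate a bimodal CEP distribution: phases lock to the pump phase
# up to a sign, i.e. cluster near 0 or near pi (illustrative widths).
random.seed(1)
phases = [random.gauss(0.0, 0.3) if random.random() < 0.5
          else random.gauss(math.pi, 0.3) for _ in range(1000)]

# Wrap each phase into [0, 2*pi) and threshold: the half-plane around
# pi maps to bit 1, the half-plane around 0 maps to bit 0.
bits = [1 if math.pi / 2 < (p % (2 * math.pi)) < 3 * math.pi / 2 else 0
        for p in phases]
balance = sum(bits) / len(bits)   # near 0.5 for an unbiased source
```

        Statistical quality of such sequences would then be certified with the NIST test suite mentioned above.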

        References
        [1] Opt. Express 20, 19322-19330 (2012)
        [2] Science 381, 205-209 (2023)
        [3] Optica 5, 402-408 (2018)

        Speaker: Michael Weil (University of Ottawa)
      • 117
        Understanding Shot Noise Reduction in Optical Parametric Amplifiers

        Amplitude-squeezed light provides an opportunity to reduce the shot noise inherent to classical light sources, in turn improving the signal-to-noise ratio. Such amplitude-squeezed states could allow Stimulated Raman Scattering (SRS) microscopy to be used for imaging light-sensitive samples, which could otherwise be damaged by the intensity needed for SRS microscopy with classical light sources. However, practical implementation of amplitude squeezing for ultrafast laser pulses requires significant development and investigation.

        One method to produce squeezed states is through an optical parametric amplifier (OPA). We have developed a novel numerical technique that accurately models the generation of multimode squeezed states in an OPA, including the change in intensity and the quantum fluctuations of the pump pulse. Our method allows us to simulate the spatial-temporal interactions in three physical dimensions. This permits us to determine the full three-dimensional structure of the squeezed pulse, a vital step for predicting image formation inside of a microscope. It is useful for determining properties of the light which will be used as input for the microscope, and to optimize input parameters for ideal pulse shape and level of squeezing of the quantum light source itself.

        We explore the generation of squeezed coherent states for reducing shot noise for applications in nonlinear optical microscopy, such as mineralogical sampling. In particular, we present practical limitations of shot noise reduction due to the phase sensitivity of OPAs. Results for two-dimensional simulations demonstrate how the level of squeezing and the pulses’ intensities depend on the pulses’ input phases and beam widths, among several variables. We show that the quantum aspects of light, such as the level of squeezing, are much more sensitive to phase than classical variables, such as gain or loss. We further explore these effects in three dimensions.

        Speaker: Mitchel Morrison (University of Ottawa)
      • 118
        Ultrabroadband fluorescence-detected two-dimensional electronic spectroscopy

        Two-dimensional electronic spectroscopy (2DES) has proven to be a powerful tool for studying electronic structure and ultrafast dynamics in a wide range of systems, including natural and artificial light-harvesting complexes, solar cells, molecular systems, quantum dots, and polymers. Fluorescence-detected 2DES (F-2DES) has been shown to provide complementary information to its coherently detected counterpart and easy integration with a microscope for spatially-resolved measurements. To date, the bandwidth of many 2DES and F-2DES methods has been limited by various aspects of their experimental implementation, such as pulse shapers and acousto-optic modulators (AOMs). We will present a rapid-scanning approach to F-2DES that does not rely on such implementations and thus allows broadband implementation. Building on recent work, this approach uses the velocities of the time-delay stages to shift the signals of interest above the 1/f laser noise and interferometrically track the time delays to enable the correction of spectral phase distortions. Isolation of various signal contributions is achieved directly via analysis of the collected time-domain data. Removing the need for AOMs or pulse shapers enables reduced dispersion in the setup, increases the power throughput, and allows one to measure broadband 2D spectra within a fraction of the time compared to established 2D setups. We will demonstrate this method on a laser dye using a broadband continuum light source.

        Speaker: Benjamin den Otter-Versteeg (Department of Physics, University of Ottawa)
      • 119
        Inline squeezing with ultrafast time-bin encoding

        Squeezed states of light are a fundamental resource in photonic-based quantum technologies, enabling the generation and manipulation of non-classical states of light required for universal quantum protocols. A central challenge in realizing robust non-classical states is achieving sufficiently high squeezing while preserving high purity and fidelity. To address this challenge, inline squeezing has been proposed as a compact method for generating highly squeezed states while preserving spectral purity. In this scheme, a Two-Mode Squeezed Vacuum (TMSV) state is first generated by driving a non-linear medium with a pump pulse. The output modes of the first crystal (signal, idler, pump) are arranged to match the input modes of the second crystal, undergoing parametric amplification in the second non-linear medium. By coherently amplifying an existing squeezed state rather than simply increasing the pump power to achieve the same squeezing parameter, inline squeezing enables a more versatile way of generating highly squeezed states.

        Figure 1: Time-bin encoding scheme for inline squeezing, where two pump pulses at 775 nm in orthogonal polarizations are sent into two cascaded bulk ppKTP crystals. In the first crystal, time bin t_0 transforms an input vacuum state into TMSV at 1550 nm. In the second crystal, time bin t_1 implements parametric amplification of the previously generated TMSV, effectively increasing the overall squeezing parameter while preserving purity.

        Here we demonstrate the generation of inline squeezing using an ultrafast time-bin encoding approach. This is achieved by employing two cascaded Periodically Poled Potassium Titanyl Phosphate (ppKTP) crystals with orthogonal fast axes, enabling sequential nonlinear interaction within a single spatial mode. Two pump pulses with perpendicular polarization are first generated in distinct time bins, where the temporal delay between them is introduced using a birefringent crystal. This delay is chosen to match the group-velocity walk-off introduced by a single ppKTP crystal. By manipulating the time bins in this manner, both pump pulses temporally overlap upon exiting the first crystal, as shown in Fig. 1, enabling phase-coherent parametric amplification of the TMSV generated in the first crystal within the second crystal. We characterized our inline squeezer using state-of-the-art detectors such as Superconducting Nanowire Single-Photon Detectors and Transition Edge Sensors.
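
        The required time-bin delay is set by the group-velocity walk-off, Δt = L·Δn_g/c; a sketch with placeholder numbers (the crystal length and group indices below are assumptions, not the experiment's values):

```python
C = 299_792_458.0                 # speed of light (m/s)

def walkoff_delay_fs(length_m, ng_fast, ng_slow):
    # Delay accumulated between the two polarizations over the crystal,
    # from the difference in group indices along the fast and slow axes.
    return length_m * (ng_slow - ng_fast) / C * 1e15

# Hypothetical 2 mm crystal with a group-index difference of 0.10:
print(walkoff_delay_fs(2e-3, 1.74, 1.84))  # roughly 667 fs
```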

        Speaker: Jonathan Baker (1.National Research Council of Canada, 100 Sussex Drive, Ottawa, Ontario, K1A 0R6, Canada 2.Departement of Physics, University of Ottawa, Advanced Research Complex, 25 Templeton Street, Ottawa ON, K1N 6N5, Canada)
    • (DCMMP) M3-4 | (DPMCM)
    • (DQI) M3-5 | (DIQ)
      • 120
        Strain‑mediated control of hole double quantum dot states using surface acoustic waves in non‑piezoelectric materials

        Strain engineering has long been central to semiconductor physics, traditionally used to modify band structures and improve device performance. Recently, it has emerged as an active quantum control parameter, directly coupling to electronic and spin states. In quantum architectures based on compressively strained germanium-on-silicon (cs GoS) technology, both intrinsic and externally applied strain fields are crucial for lifting valence band degeneracy, separating heavy- and light-hole states, and affecting spin–orbit coupling. These strain-induced effects enable rapid, fully electrical and therefore addressable manipulation of hole spin qubits. Recent advances in coherence and tunability of hole spin states have positioned cs GoS material as a very promising platform for scalable hole-based quantum information processing.
        In this work, we investigate the interaction between surface acoustic waves (SAWs) and holes confined in gate-defined double quantum dots in cs GoS heterostructures. We combine valence-band theory with COMSOL Multiphysics simulations of acoustic propagation. We find that SAW-induced biaxial strain is the dominant mechanism modulating hole energy levels in this non-piezoelectric system, exceeding displacement-mediated electrostatic effects by more than an order of magnitude under realistic conditions. Exploiting sound-velocity contrast within the heterostructure, we identify hybrid Rayleigh–Sezawa modes whose strain fields are naturally localized at the Ge quantum well containing the spin qubits, maximizing phonon–qubit coupling without the need for suspended structures or complex phononic crystals. Furthermore, we show that replacing the piezoelectric layer in the SAW delay line with an acoustically matched non-piezoelectric dielectric preserves high-quality acoustic transmission while minimizing charge noise, enabling purely mechanical strain coupling at the qubit site. For lateral double quantum dots placed at opposite strain phases, SAW excitation yields strong differential modulation of interdot detuning, enabling spin-manipulation gates at tens of gigahertz. This demonstrates SAWs as a route for strain control of hole spin qubits in centrosymmetric group-IV semiconductors within CMOS-compatible architectures.

        Speaker: Mr Yousef Karimi Yonjali (Department of Electronics, Carleton University)
      • 121
        Zeeman and Hyperfine Effects in Diamond Magnetometry

        The diamond NV centre consists of a substitutional nitrogen adjacent to a vacancy, and can exist in both negatively charged (NV-) and neutral (NV0) states. NV- centres are spin-1 centres which exhibit a magnetic moment due to the coupling of two individual electron magnetic moments. The magnetic field sensitivity of the spin-1 NV- centre results in a number of interesting physical effects in the presence of various external fields (electric, magnetic, microwave, and light). Because of this, NV- centres have become candidates for robust magnetic field sensors of several designs. Currently, one aspect at the forefront of NV-centre research is the development of compact optical-readout magnetometers.
        Diamond NV- magnetometers can exploit the Zeeman effect within the diamond matrix to produce discernible resonance peaks whose separation, $2\gamma B$, is proportional to the external magnetic field strength $B$, with $\gamma$ the NV gyromagnetic ratio [ejalonibu2019optimal, davis]. Depending on the orientation of the field, this Zeeman splitting can produce a multitude of different peak separations based on the four possible orientations of the NV centre in the crystal relative to the externally applied magnetic field. In addition, the nuclear spin of the dominant $^{14}$N constituent of the nitrogen dopant atoms produces a hyperfine splitting. Presented here are Optically Detected Magnetic Resonance (ODMR) measurements of Zeeman and hyperfine splitting in a N-doped diamond crystal, with some discussion of their implications for magnetometry.
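        The $2\gamma B$ peak separation and its dependence on the four NV orientations can be estimated numerically; a small sketch assuming the standard NV$^-$ gyromagnetic ratio $\gamma \approx 2.8$ MHz/G (an assumed textbook value, not a number from this abstract):

```python
import numpy as np

# NV gyromagnetic ratio (standard value, ~2.803 MHz per gauss)
GAMMA_MHZ_PER_G = 2.803

# Four possible NV symmetry axes in the diamond lattice (<111> directions)
NV_AXES = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

def zeeman_splittings(b_field_gauss):
    """Peak separations 2*gamma*|B_parallel| (MHz) for each NV orientation."""
    b = np.asarray(b_field_gauss, dtype=float)
    b_parallel = np.abs(NV_AXES @ b)  # field projection on each NV axis
    return 2.0 * GAMMA_MHZ_PER_G * b_parallel

# Example: 10 G field along the lab z-axis; all four axes share |cos| = 1/sqrt(3)
seps = zeeman_splittings([0.0, 0.0, 10.0])
print(np.round(seps, 2))
```

        For a generic field orientation the four projections differ, yielding up to four resolvable pairs of ODMR peaks.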

        Speaker: William Davis (University of Saskatchewan)
      • 122
        Finite-Size Effects in Quantum Metrology at Strong Coupling: Microscopic vs Phenomenological Approaches

        We study the ultimate precision limits of a spin chain, strongly coupled to a heat bath, for measuring a general parameter and report the results for specific cases of magnetometry and thermometry. Employing a full polaron transform, we derive the effective Hamiltonian and obtain analytical expressions for the quantum Fisher information (QFI) of equilibrium states in both weak coupling (WC) and strong coupling (SC) regimes for a general parameter, explicitly accounting for finite-size (FS) effects. Furthermore, we utilize Hill's nanothermodynamics to calculate an effective QFI expression at SC. Our results reveal a potential advantage of SC for thermometry at low temperatures and demonstrate enhanced magnetometric precision through control of the anisotropy parameter. Crucially, we show that neglecting FS effects leads to considerable errors in QFI calculations. This work also highlights the inadequacy of phenomenological approaches in describing the metrological capability and thermodynamic behavior of systems at SC.
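        As a point of reference for the weak-coupling equilibrium results discussed above, the temperature QFI of a Gibbs state has a standard closed form (a textbook relation, not the authors' strong-coupling or finite-size expression; $k_B = 1$):

```latex
% Thermal QFI of a Gibbs state \rho_T = e^{-H/T}/Z:
F_Q(T) = \frac{\langle H^2 \rangle - \langle H \rangle^2}{T^4} = \frac{C(T)}{T^2},
\qquad \delta T \ge \frac{1}{\sqrt{\nu\, F_Q(T)}},
% where C(T) is the heat capacity and \nu the number of independent measurements.
```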

        Speaker: Ali Pedram (University of Calgary)
      • 123
        Polarization performance of a dual-rail warm Rb quantum memory (QMem)

        Warm rubidium (Rb) vapor quantum memories are promising platforms for scalable quantum networks due to their technical simplicity and compatibility with room-temperature operation. A key performance metric for these memories is the signal-to-noise ratio (SNR), which directly impacts the fidelity of stored and retrieved quantum states. For practical quantum networking, it is essential that memory performance be independent of the input polarization. In this work, we demonstrate polarization-agnostic operation of a warm Rb quantum memory based on electromagnetically induced transparency (EIT) using a dual-rail architecture.
        The memory stores arbitrary polarization states by coherently mapping orthogonal polarization components of the input probe onto two spatially separated atomic ensembles that experience identical optical and magnetic environments. We perform polarization-resolved measurements of the retrieved signal and dominant noise processes, to assess polarization dependence. By preparing the input probe in a complete set of linear and circular polarization states and analyzing the output in multiple polarization bases, we verify uniform storage efficiency and noise characteristics across all polarizations.
        We will observe both the retrieval efficiency and SNR for all input polarization states, and validate polarization-agnostic behavior for this dual-rail system. A classical bit error rate (BER) will be determined from the SNR by comparing it to the acceptable limits defined in classical communication schemes, providing insight into the utility of our QMem in a quantum network. Rail recombination and input-power imbalances are expected to be the main contributors to SNR discrepancies; the SNR will be calibrated against the detector response to input polarization.
        Our work aims to establish dual-rail warm-vapor quantum memories as robust, polarization-agnostic interfaces for storing polarization-encoded photonic qubits, with direct relevance to quantum repeaters, polarization-multiplexed communication, and room-temperature quantum information processing.
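        The planned BER-from-SNR conversion can be illustrated with a common textbook relation for on-off-keyed signals in Gaussian noise (an assumed detection model for illustration; the memory's actual noise statistics may differ):

```python
from math import erfc, sqrt

def ber_from_snr(snr_linear):
    """BER for on-off keying with Gaussian noise: BER = 0.5*erfc(Q/sqrt(2)), Q = sqrt(SNR)."""
    return 0.5 * erfc(sqrt(snr_linear) / sqrt(2.0))

# BER at a few linear SNR values converted from dB
for snr_db in (3, 6, 10, 13):
    snr = 10 ** (snr_db / 10)
    print(f"{snr_db:>3} dB SNR -> BER = {ber_from_snr(snr):.2e}")
```

        Comparing the resulting BER against a classical threshold (e.g. pre-FEC limits) is then a direct lookup.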

        Speaker: Kenneth Gregory
      • 124
        Exponential onset of scalable entanglement via twist-and-turn dynamics in XY models

        The efficient preparation of scalable multipartite entanglement is crucial for advancing next-generation quantum devices. We explore twist-and-turn (TaT) dynamics in XY models with ferromagnetic, dipolar interactions and a Rabi field. Our study reveals their capacity to achieve scalable spin squeezing at short times and quantum Fisher information with Heisenberg scaling at later times, suggesting generation of scalable multipartite entangled states and potential quantum metrological advantages. We show that TaT dynamics in two-dimensional dipolar systems reproduce results of infinite-range interactions within certain parameter regimes. In particular, scalable entanglement through spin squeezing develops in a time that grows only logarithmically with system size, saturating the maximum speed of entanglement buildup allowed by generalized Lieb-Robinson bounds for power-law interactions. Additionally, our study highlights nontrivial nonthermalizing dynamics at intermediate energy scales. This work holds experimental relevance for systems with power-law interactions, such as Rydberg atoms and trapped ions, which can potentially implement the TaT protocol. Furthermore, it provides foundational insights into the intersection of quantum many-body dynamics and quantum metrology. Reference: arXiv:2507.08206.

        Speaker: Dr Meenu Kumari
    • (DNP) M3-6 Nuclear Structure | Structure nucléaire (DPN)
      • 125
        Precision mass spectrometry at TITAN, TRIUMF

        Mass spectrometry plays a crucial role in numerous fields of physics such as nuclear structure and neutrino research. Precise mass measurements provide information on nuclear binding and nucleon separation energies, offering insight into shell and subshell closures, and nuclear deformation. The TITAN (TRIUMF's Ion Trap for Atomic and Nuclear science) facility is committed to conducting high-precision and fast mass measurements. Over the past several measurement campaigns, the masses of numerous nuclides located across the nuclear chart have been measured. Most campaigns have utilized a fast, time-of-flight technique using a Multi-Reflection Time-of-Flight Mass Spectrometer (MR-TOF-MS). The measured masses have been used to constrain calculations and probe nuclear structure effects. In addition to the MR-TOF-MS, use of the TITAN Penning trap has allowed higher precision in mass measurements to be achieved. A new cryogenic cooling system has been installed at the TITAN Penning trap to achieve ultra-high vacuum conditions ($\sim$10$^{-11}$ mbar) and facilitate trapping of ions over longer periods of time ($\geq$1 s), as well as measurements of highly charged ions formed in the TITAN Electron Beam Ion Trap. The cryogenic trap was commissioned with a measurement campaign to probe $\beta$ decay in $^{48}$Ca and refine $\beta\beta$ decay calculations. A final upgrade to the trap is currently being implemented, changing from a time-of-flight to a phase-based determination of the cyclotron frequency and therefore the mass. This is expected to push the limit of achievable mass precision below the 10$^{-10}$ level, making it possible to conduct measurements probing fundamental symmetries and tests of the Standard Model and beyond. Results from mass measurements of neutron-rich nuclides using the MR-TOF-MS, and the cryogenic Penning trap commissioning, will be presented, along with a summary of the implemented and ongoing upgrades.
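        Both the time-of-flight and phase-based schemes mentioned above ultimately rest on the Penning-trap relation $\nu_c = qB/(2\pi m)$; inverting it for the mass is a one-liner (the field and frequency values below are illustrative only, not TITAN's):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU_KG = 1.66053906660e-27   # kg per atomic mass unit

def mass_from_cyclotron(nu_c_hz, b_tesla, charge_state=1):
    """Invert nu_c = q*B / (2*pi*m) to get the ion mass in atomic mass units."""
    m_kg = charge_state * E_CHARGE * b_tesla / (2.0 * math.pi * nu_c_hz)
    return m_kg / AMU_KG

# Illustrative (assumed) numbers: a singly charged ion at nu_c = 1.4 MHz in a 3.7 T field
m_u = mass_from_cyclotron(nu_c_hz=1.4e6, b_tesla=3.7, charge_state=1)
print(round(m_u, 3))
```

        In practice the magnetic field is calibrated against a reference ion of well-known mass, so measured masses enter as frequency ratios rather than absolute field values.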

        Speaker: Dwaipayan Ray (TRIUMF)
      • 126
        Decay Spectroscopy of 161Eu with the GRIFFIN Spectrometer

        The neutron-rich gadolinium isotopes around mass A=160 represent a critical region for understanding both nuclear structure and astrophysical nucleosynthesis. These nuclei exhibit large prolate deformations and lie along the freeze-out path of the rapid neutron capture process (r-process); understanding their structure can give insight into the formation of the rare-earth abundance peak. The odd-mass isotope ${}^{161}$Gd (N=97) provides a sensitive probe of the single-particle spectrum in this deformed region through its one-neutron excitations relative to the well-studied even-even core of ${}^{160}$Gd. The low-lying single-neutron states in ${}^{161}$Gd have been probed via the ${}^{160}$Gd(d,p) reaction, and several Nilsson configurations were suggested. The study of the excited states of ${}^{161}$Gd via the neutron capture reaction provided very limited information on the gamma-ray decays, and the previous beta-decay study placed only four transitions in the decay scheme. The present work reports new high-precision measurements of ${}^{161}$Gd excited states following beta-decay of high-purity ${}^{161}$Eu beams produced at the TRIUMF-ISAC facility. To avoid molecular and isobaric contamination, the Ion Guide Laser Ion Source (IG-LIS) was used along with the ISAC mass separator, greatly improving beam purity through two-step laser ionization selecting Eu isotopes only. Using the GRIFFIN high-efficiency gamma-ray spectrometer augmented with the PACES conversion-electron spectrometer and beta-particle tagging with the Zero Degree Scintillator, this work has identified 87 new gamma-ray transitions and constructed a level scheme comprising 35 new excited states. The half-life of ${}^{161}$Eu has been remeasured with improved precision. An overview of the experiment and analysis will be provided, with the results providing stringent tests of the Nilsson model in the midshell region and delivering crucial nuclear structure inputs for r-process calculations.

        Speaker: Jizhong Liu (TRIUMF/UVic)
      • 127
        Search for Shape Coexistence Signatures in 100Ru Using the Thermal Neutron Capture Reaction

        At the forefront of nuclear structure research is the topic of shape coexistence, which occurs when states within the same nucleus at similar energies possess distinct shapes. Studies of nuclei in the Zr (Z=40) - Sn (Z=50) region have shown evidence for shape coexistence with deformed rotational-like bands coexisting with spherical or weakly-deformed ground state configurations. In the Ru (Z=44) isotopes, strong evidence has emerged for shape coexistence within 102Ru and 104Ru from Coulomb excitation [1,2], and it was suggested to be present in 98Ru and 100Ru as well [3]. In order to explore shape coexistence in 100Ru, and also probe possible vibrational motion, key mixing ratios and the observation of low-energy, and hence often very weak intensity, transitions between non-yrast states are required. The study of 100Ru presented in this work aims to extract precise transition multipolarity mixing ratios, previously unobserved weak gamma-ray transitions, and transition probabilities to resolve its structural nature. We used the thermal neutron capture reaction, 99Ru(n,γ)100Ru, carried out at the Institut Laue-Langevin in Grenoble, France. The gamma-ray transitions depopulating the excited states in 100Ru were detected by the FIPPS array, consisting of two sets of eight clover-type high-purity germanium detectors. FIPPS provides high efficiency and the ability to perform detailed gamma-gamma angular correlations due to its high granularity. Results from the current analysis will be presented with an emphasis on the structural implications of the results.

        Speaker: Sangeet-Pal Pannu (University of Guelph)
    • (DPE) M3-7 Student Competition Session | Session du concours étudiant (DEP)
      • 128
        Introductory Electronics for Scientists: Effects of high-integration approach on critical thinking lab skills and epistemological beliefs in physics undergraduates

        The landscape of student learning in undergraduate physics courses has undergone a gradual shift since 2020. This study aims to address this fundamental shift among a group of physics undergraduates taking a second-year introductory electronics course. By changing the modality of learning to revolve around a book written specifically for the course (Introductory Electronics for Scientists), the lecture and laboratory materials become closely coupled. The impact of this high-integration approach is assessed using survey data and interviews on students' laboratory skills, epistemological beliefs, and learning habits, tracked over the duration of the course. Results and outcomes from the Winter 2025-2026 cohort will be discussed, with implications for future practice.

        Speaker: Sophie Della Manna (Brock University)
      • 129
        Assessing the Urban-Rural Gap in Ontario High School Enrolment Choices

        Gendered patterns persist within specific STEM subjects in Ontario high schools, with physics remaining male-dominated and biology increasingly female-dominated [1]. Beyond gender, school location also impacts students’ opportunities in STEM. Rural schools are often smaller and less connected to nearby institutions, which can make staffing specialized courses challenging, in turn limiting students’ choices in senior STEM subjects [2,3].

        This study investigates the underlying factors that contribute to urban-rural differences in students’ post-secondary readiness. It uses descriptive analysis of Ontario enrolment data (2006–2023), focusing on which senior STEM courses are offered, how frequently they are available, and the marks students achieve in those courses. In doing so, it will provide insight into both the gendered gaps in enrolment across STEM fields, as previously demonstrated for physics and biology [1], and the ways school location intersects with gender to influence achievement and access in senior STEM courses across Ontario.

        [1] Corrigan, E., Williams, M. & Wells, M. High School Enrolment Choices- Understanding the STEM Gender Gap, (2023). Can. J. Sci. Math. Techn. Educ. 23:403–421 https://doi.org/10.1007/s42330-023-00285-y
        [2] Looker, D.E. Regional Differences in Canadian Rural-Urban Participation Rates in Post-Secondary Education, (2019). A MESA Project Research Paper. Toronto, ON: Educational Policy Institute.
        [3] Nielsen, W. Accessing senior science courses in rural BC: A cultural border crossing metaphor, (2004). Paper presented at the annual meeting of the Canadian Society for Studies. Winnipeg, May 2004. https://doi.org/10.11575/ajer.v53i2.55261

        Speaker: Michaela Hishon (University of Guelph)
      • 130
        Comparison of the Experiences of Female and Male Students Taking Labatorials

        It has been shown “that female students with A’s have similar physics self-efficacy as male students with C’s in introductory courses. Also female students are less likely to see themselves as a physics person than male students” (Marshman, Yasemin Kalender, Nokes-Malach, Schunn, & Singh, 2018). See also Yangqiuting Li, Kyle Whitcomb, and Chandralekha Singh (2020) and Hazari, Tai & Sadler (2007).
        Labatorials (combination of “lab” and “tutorial”) developed at the University of Calgary were inspired by the introductory physics tutorial system entitled ‘Tutorials in Introductory Physics’ at the University of Washington. Students doing Labatorials typically work in groups of four to five using structured worksheets that target prior understanding and emphasize conceptual reasoning alongside hands-on experimentation. The worksheets guide students through predictions, calculations, graphing, and experimental tasks, with a stronger emphasis on experimental engagement than traditional labs.
        We present results from a study comparing the experiences of female and male students undertaking labatorials. Using a mixed-methods approach, the study collects qualitative and quantitative data, including pre-tests, post-tests, reflective writing assignments, interviews and surveys. The research was conducted at Concordia University and Mount Royal University with student groups composed of mixed genders.
        Interviewed students perceived that women more frequently engaged socially to support learning, while men more often relied on autonomy. Women also played a larger role in maintaining social cohesion within lab groups. A strong belief in a gendered division of labor was reported, with men typically performing hands-on experimental tasks and women assuming analytical or note-taking roles. Initially, female students reported lower confidence in handling laboratory apparatus; however, by the end of the semester, they were more actively engaged in working with the experimental setup.
        Female students gained confidence and reported feeling capable of working with experimental equipment and contributing more equally alongside male students.

        Speaker: Ms Lydie Lachance Djilo Kamdem (Concordia University)
      • 131
        An Introductory Experiment in Second Harmonic Generation

        This project is part of a comprehensive revision of the Undergraduate Physics Laboratory Curriculum at the University of Waterloo, supported by the Dean's Undergraduate Teaching Initiative, Waterloo Science Endowment Fund (WatSEF), and the Sinclair Foundation.

        I have designed an introductory-level experiment on Second Harmonic Generation (SHG). SHG is a nonlinear optical process in which photons interact to produce light at twice their original frequency. The experiment is taught using inquiry-based instruction. Students investigate whether SHG depends on pulse energy, peak power, or laser intensity using a titanium-sapphire femtosecond laser (850±50 nm, 100±50 fs pulses) and a beta barium borate (BBO) crystal. The experiment will be implemented in the new Gee-Whiz Lab Course (GWLC), where students will conduct contemporary physics experiments without requiring prior subject mastery. This approach encourages students to revisit these beyond-introductory-level topics throughout their undergraduate education and explore how experimental investigations contribute to progress in physics in ways distinct from theoretical approaches. Preliminary work suggests the experiment is both technically feasible and pedagogically effective, providing a foundation for future introductory-level curriculum development in nonlinear optics and other topics typically reserved for upper-year or graduate study. Previous presentations by this research group covered preliminary work that was foundational to the project; this presentation summarizes the first completed experiment for the new lab course and outlines our next steps.

        Speaker: Urja Nandivada
    • (DPMB) M3-8 | (DPMB)
      • 132
        Modelling Metamorphic Protein Kinetics in Crowded Cytosolic Environments

        Cellular interiors are densely crowded with proteins and other macromolecules, leading to excluded-volume and weak enthalpic effects that can strongly modulate protein folding, binding, and aggregation. To study these phenomena, we are developing a combined computational-experimental framework to quantify crowding effects using a well-defined “model cytosol” and a sequence-aware coarse-grained (CG) molecular dynamics (MD) model.

        On the computational side, we are adapting an in-house Cα dual-basin CG model to simulate multi-protein mixtures with explicit inter-chain interactions. Two new sequence-guided terms have been implemented: (i) a short-range effective hydrophobic attraction and (ii) a Debye-screened electrostatic potential with pH-dependent residue charges. In parallel, we are building a six-protein experimental mixture (2:2:3:2:1:1 of horse myoglobin, bovine β-lactoglobulin, chicken albumin, human hemoglobin, horse hemoglobin, and bovine serum albumin) and using resultant data from dynamic light scattering experiments to calibrate model parameters.

        This model will be used to investigate the fold-switching dynamics of human chemokine XCL1 (lymphotactin), a metamorphic protein involved in the immune response that transitions between two distinct native folds depending on local conditions. We aim to quantify how cytosol-like crowding and weak enthalpic interactions modulate XCL1 fold-switching kinetics using CG MD simulations, with key predictions testable by NMR. More broadly, the framework offers a sequence-informed way to predict nonspecific association and early aggregation risk in crowded environments, with implications for therapeutic developability and formulation.
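        The Debye-screened electrostatic term described above has a standard functional form; a generic sketch of the Debye-Hückel pair potential, with units and parameter values chosen purely for illustration (not the authors' calibrated model parameters):

```python
import numpy as np

# Coulomb constant in common MD units: kJ mol^-1 nm e^-2
KE = 138.935458

def debye_huckel(r_nm, q1, q2, kappa_per_nm, eps_r=80.0):
    """Screened Coulomb pair energy: U(r) = k_e q1 q2 exp(-kappa r) / (eps_r r)."""
    return KE * q1 * q2 * np.exp(-kappa_per_nm * r_nm) / (eps_r * r_nm)

# Two opposite unit charges 1 nm apart at ~150 mM salt
# (Debye length ~0.78 nm, so kappa ~ 1.27 /nm -- assumed illustrative values)
u = debye_huckel(1.0, +1, -1, kappa_per_nm=1.27)
print(round(float(u), 3))
```

        pH dependence enters by making the residue charges q1, q2 functions of solution pH rather than constants.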

        Speaker: Francis Dominie
      • 133
        Coarse-Grained Langevin Dynamics Model for Polyethylene Glycol

        The interior of cells is a highly dense space in which macromolecules occupy a substantial fraction of the volume. Crowding effects have been shown to impact the conformations and diffusion of proteins, including polymer-like intrinsically disordered proteins. Here we study macromolecular crowding effects in a polymer system consisting of polyethylene glycol (PEG) and Ficoll, which acts as a crowder molecule. The goal of this research is to understand the chain compaction and reduced diffusion of PEG chains in the presence of a high volume fraction of crowders.

        PEG is modeled as a bead-spring chain with additional angular and torsional interactions, as well as non-bonded Lennard-Jones interactions. Conformational sampling of the system is carried out using Langevin dynamics. The model is first parametrized with experimental data by tuning the non-bonded interaction strength to reproduce the experimentally measured scaling of the radius of gyration, $R_g$, with the number of monomer units, $N$. Crowders are modeled as spheres of radius $R_c$; their effect on $R_g$ is characterized in terms of the size ratio, $\lambda=R_g/R_c$, and the crowder concentration, expressed as a packing fraction $\phi_c$.

        Simulation results reproduce key experimental trends: polymer dimensions are weakly affected when polymers and crowders are comparable in size ($\lambda\sim1$), while significant compression occurs for $\lambda>1$. By isolating individual parameters that are inaccessible in experiment, these simulations provide insight into how excluded-volume effects and crowding geometry contribute to polymer behavior in experiments. This work demonstrates the utility of CGMD simulations for bridging experiment and theory in crowded polymer systems and for improving coarse-grained models of macromolecular behavior in complex environments.
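        The parametrization step above, tuning interactions to reproduce the measured scaling of $R_g$ with $N$, amounts to extracting a scaling exponent; a minimal sketch of such a log-log fit, with synthetic data standing in for simulation output (the prefactor 0.4 and exponent 0.588 are illustrative, not fitted values from this work):

```python
import numpy as np

def flory_exponent(n_monomers, rg_values):
    """Fit Rg ~ a * N^nu on a log-log scale; returns (nu, a)."""
    slope, intercept = np.polyfit(np.log(n_monomers), np.log(rg_values), 1)
    return slope, np.exp(intercept)

# Synthetic data obeying good-solvent scaling (nu ~ 0.588 for a self-avoiding walk)
N = np.array([20, 40, 80, 160, 320])
rg = 0.4 * N**0.588
nu, a = flory_exponent(N, rg)
print(round(nu, 3))  # -> 0.588
```

        With real simulation data the fitted exponent shifts below the good-solvent value as crowding compresses the chain.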

        Speaker: Jorja Pevie (Memorial University of Newfoundland)
      • 134
        Computer Simulations of the Folding of Metamorphic Proteins B4, SB3, and SB4

        Metamorphic proteins are an unusual class of proteins capable of adopting more than one stable native fold, challenging the classical assumption that a single amino acid sequence encodes a single structure. In this work, we investigate the folding behaviour of three closely related proteins: B4, SB3, and the engineered mutant SB4. Despite differing by only a small number of residues, these proteins populate distinct structural states experimentally, with SB4 in particular exhibiting the ability to switch between a B-like fold and an S-like fold.
        To probe the mechanisms underlying this fold-switching behaviour, we employ a coarse-grained Cα structure-based model combined with Langevin dynamics. In this framework, each amino acid is represented by a single bead, and native interactions are encoded through structure-based contact potentials, enabling efficient exploration of protein folding energy landscapes. By varying temperature and the relative strengths of native contacts associated with the B- and S-folds, we simulate the folding of B4, SB3, and SB4. The resulting trajectories are analysed using RMSD-based order parameters, native-contact fractions, and free-energy surface projections.
        The simulations reproduce the expected two-state folding behaviour of B4, reveal a more complex energy landscape for SB3 consistent with the presence of an intermediate state, and demonstrate that SB4 can populate both the B- and S-folds depending on model parameters. These results show that simplified coarse-grained models can capture key features of metamorphic protein folding and provide insight into how minimal sequence changes reshape the underlying energy landscape. The findings complement recent experimental studies and contribute to a deeper understanding of the physical basis of protein fold switching.
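        The Langevin-dynamics sampling used in such coarse-grained studies can be illustrated with a generic overdamped integrator on a toy dual-basin potential (a 1-D stand-in for two competing folds, not the authors' Cα contact model; all parameter values are assumptions):

```python
import numpy as np

def overdamped_langevin(grad, x0, n_steps, dt=1e-3, kT=0.3, gamma=1.0, seed=0):
    """Euler-Maruyama trajectory: dx = -grad(x)/gamma dt + sqrt(2 kT dt / gamma) * xi."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    noise_amp = np.sqrt(2.0 * kT * dt / gamma)
    for i in range(n_steps):
        x[i + 1] = x[i] - grad(x[i]) * dt / gamma + noise_amp * rng.standard_normal()
    return x

# Toy double-well U(x) = (x^2 - 1)^2, basins at x = -1 and x = +1
grad = lambda x: 4.0 * x * (x**2 - 1.0)
traj = overdamped_langevin(grad, x0=-1.0, n_steps=20000)
print(traj.shape)
```

        Varying the basin depths here plays the role of varying the relative strengths of B- and S-fold native contacts in the full model.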

        Speaker: Ken Tse Pen Ki
      • 135
        Varieties of the Schelling Model: Foundations for Exploring Positional Choice Models

        Simulating simplified models of social interactions with agent-based models (ABMs) fills a special role in the study of social phenomena, where it is often impossible to design controlled experiments. Schelling’s model of segregation is one of the best-known ABMs, notable for being among the first and simplest to demonstrate how societal outcomes can collectively fail to match individual preferences. This link between microscopic rules and macroscopic social behavior has established it as a valuable bridge between the physical, computational, and social sciences, but has also yielded a proliferation of model variants and a disjointed state of the literature. In this work, a comprehensive analysis of Schelling model rule variants is achieved by classifying the space of macroscopic outcomes via phase diagrams. Among 54 rule variants, only 3 phase-diagram classes are found, characterized by the number of phase transitions. The statistical and dynamic drivers of these transitions are elucidated by analyzing the roles of agent vision, movement criteria, vacancies, the initial state, and rivalry. This comprehensive classification gives new insight into the drivers of phase transitions in the Schelling model and creates a basis for studying model extensions. We report progress on how sloped, curved, and peaked satisfaction functions, along with a stochastic move rule, affect the system’s phase space.
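        For readers unfamiliar with the baseline model, a minimal version of the classic Schelling rule set (threshold satisfaction, random relocation of unhappy agents) can be sketched as follows; grid size, vacancy fraction, and threshold are illustrative, not the parameters of this study:

```python
import numpy as np

def schelling_step(grid, threshold, rng):
    """One sweep: each unhappy agent moves to a randomly chosen vacancy (0 = vacant)."""
    n = grid.shape[0]
    for i, j in np.ndindex(grid.shape):
        a = grid[i, j]
        if a == 0:
            continue
        # Fraction of like neighbors in the Moore neighborhood (periodic boundaries)
        nbrs = [grid[(i + di) % n, (j + dj) % n]
                for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
        occupied = [b for b in nbrs if b != 0]
        if occupied and sum(b == a for b in occupied) / len(occupied) < threshold:
            vacancies = np.argwhere(grid == 0)
            if len(vacancies):
                vi, vj = vacancies[rng.integers(len(vacancies))]
                grid[vi, vj], grid[i, j] = a, 0
    return grid

rng = np.random.default_rng(1)
grid = rng.choice([0, 1, 2], size=(20, 20), p=[0.1, 0.45, 0.45])
n_agents = int((grid != 0).sum())
for _ in range(10):
    grid = schelling_step(grid, threshold=0.5, rng=rng)
print(n_agents, int((grid != 0).sum()))  # moves conserve the agent count
```

        The rule variants classified in the work change pieces of this loop: the neighborhood (vision), the move criterion, and the satisfaction function applied to the like-neighbor fraction.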

        Speaker: Marlyn Mwita (University of Waterloo)
      • 136
        Synergistic Effects of H-NS and Crowders on Bacterial Chromosome Organization

        Cells organize their chromosomes through the coordinated action of chromosome-associated proteins and surrounding molecular crowders. In crowded environments, polymeric chains such as chromosomes can undergo entropically driven condensation. Using a coarse-grained computational model, we examine the physical roles of the cross-linking protein H-NS and crowders in bacterial chromosome organization. Our results reveal a two-way synergy: crowding enhances H-NS binding, while H-NS amplifies crowding-induced compaction. We further find that increased chain stiffness strengthens this synergistic effect.

        Speaker: Bae-Yeun Ha
    • (PPD) M3-9 | (PPD)
      • 137
        Neutrino Self-Interactions in Neutrino Scattering

        Testing new interactions in the neutrino sector, both in current and upcoming experiments, is essential for uncovering the nature of neutrinos. In many extensions of the Standard Model, active neutrinos may engage in self-interactions via the exchange of new light mediators, often motivated by the need to explain empirical puzzles such as the origin of neutrino mass. Cosmological data also point toward an effective Fermi constant significantly larger than what the Standard Model offers. In this talk, I will show how neutrinophilic mediators can leave indirect yet measurable signatures through radiative corrections to neutrino-matter scattering. I will also discuss the resulting new contributions to the Z-boson decay width and to non-standard neutrino interactions relevant for neutrino oscillation experiments.

        Speaker: Saeid Foroughi-Abari (Carleton University)
      • 138
        Measurement of Inclusive Antineutrino Cross-Sections in 3.5 GeV and 6 GeV Antineutrino Beams and Neutrino Counterparts in Neutrino Beams

        The next generation of long-baseline neutrino experiments requires a high precision understanding of neutrino-nucleus interactions. The MINERvA experiment at Fermilab was designed to provide this increased understanding with accurate measurements of these interactions that then guide the development of robust interaction models. In particular, cross-section measurements play a central role in advancing these models used in the measurement of oscillation parameters. A more powerful approach than measuring the cross-section in one beam is to extract the same cross-section measurement in two different beams incident on the same detector. This allows for two measurements of the same cross-section using identical detector technology, simulation, and measurement extraction procedures, while the flux of the incoming particles differs depending on the incident beam. This provides a unique opportunity to assess the impact that the changing flux has on the measurement, serving as an additional benchmark for tuning cross-section models.

        This work presents a measurement of the inclusive antineutrino (and neutrino) cross-sections, in terms of muon kinematics, on the hydrocarbon tracker region of the MINERvA detector with two different beams, one peaked at 3 GeV and one peaked at 6 GeV. This inclusive measurement as a function of muon transverse and longitudinal momenta will be presented along with comparisons to different oscillation experiment model predictions.

        Speaker: Maria Mehmood
      • 139
        The Development of Directional Calibration in P-ONE

        P-ONE is a high-volume neutrino telescope planned to span a cubic kilometre in the Pacific Ocean. It will consist of an array of optical modules (OMs) that house photomultiplier tubes (PMTs) to detect TeV-energy neutrinos via Cherenkov radiation. The timing and position of the light detected allow us to reconstruct the path and direction of the incoming neutrinos. While this information is crucial to detect sources of astrophysical neutrinos, the current calibration methods used for neutrino telescopes are limited because it is not feasible to produce particles (such as muons) at the relevant energies to understand the detector's response. P-ONE features a unique calibration system, the Muon In-Situ Tracker (MIST), which aims to resolve this issue. It consists of panels of plastic scintillators that are housed within each P-ONE OM, which can directly tag muons that pass through them. The known muon path can be used to estimate the angular resolution of the PMTs' reconstruction. Here, I discuss results from Geant4 simulations of the system and prototype testing to maximize the muon detection efficiency of MIST.

        Speaker: Tyler Martin (University of Alberta)
      • 140
        Optical Calibration of the SNO+ Scintillator Phase using a ‘Laserball’

        The SNO+ experiment is a kilotonne-scale neutrino detector located 2 km underground at SNOLAB in Sudbury, Ontario. A primary goal of SNO+ is to further our understanding of the Standard Model (SM) and the nature of neutrinos through a search for neutrinoless double beta decay (0$\nu$$\beta$$\beta$) in $^{130}$Te. A 0$\nu$$\beta$$\beta$ detection would measure the absolute neutrino mass, extend physics beyond the SM, and offer insights into the unexplained neutrino mass mechanism. The long decay half-life ($>10^{25}$ yrs) and backgrounds near the decay Q-value ($\sim2.5$ MeV) pose great challenges in detecting 0$\nu$$\beta$$\beta$. A key component of effective background modeling is the reconstruction of the energy and position of physics events within the detector. To accomplish this, a calibration campaign was designed to measure the detector optical response.
        The detector consists of a 12 m diameter acrylic vessel currently filled with 780 tonnes of liquid scintillator, linear alkylbenzene, doped with the fluor 2,5-diphenyloxazole, viewed by approximately 9400 photomultiplier tubes (PMTs) and surrounded by a shielding volume of ultra-pure water. In SNO+, a photon from a physics event is subjected to optical processes on its trajectory from the interaction vertex to the PMTs. These optical processes include scattering, absorption, and re-emission from the scintillator, acrylic, and external water; and reflection and refraction at media boundaries. Both the optical processes and PMT response are position, energy and wavelength dependent. An ideal optical calibration source, the Laserball, was developed, which produces quasi-isotropic light at well-defined wavelengths, and can be deployed throughout the detector. This talk presents the optical calibration from the first deployment of the SNO+ Laserball in the scintillator phase.

        Speaker: Jamie Grove (Queen's University)
      • 141
        Measurement of the muon flux at SNOLAB using the DEAP-3600 experiment

        The DEAP-3600 experiment is a direct dark matter (DM) search located 2 km underground at SNOLAB in Sudbury, Canada, employing a spherical acrylic vessel capable of holding a 3600 kg liquid argon (LAr) target surrounded by a water Cherenkov veto system. The inner vessel and the water tank are monitored by photomultiplier tubes (PMTs) to detect scintillation and Cherenkov light. Despite the large rock overburden, high energy cosmic ray muons produced in the atmosphere penetrate to the detector depth and constitute an important background through direct detection and the production of cosmogenic neutrons in the surrounding rocks, which can mimic dark matter signals. In this work, we study the flux of cosmic ray muons at the DEAP-3600 site using events in the water tank along with the muons which are in coincidence with the water tank and inner LAr detector. The water tank PMTs detect the Cherenkov light produced by through-going muons, whereas coincident muon signals are used in background discrimination for the dark matter search. The precise study of muon-induced backgrounds is essential in understanding and rejecting backgrounds in the DEAP-3600 dark matter search. These results provide an important benchmark for future rare-event search experiments at the SNOLAB facility.

        Speaker: Akhil Maru (Carleton University)
    • 17:30
      Travel Time | Déplacement
    • Welcome BBQ (indoors) | BBQ de bienvenue (à l'intérieur)
    • M-HERZ Herzberg Memorial Public Lecture | Conférence publique commémorative Herzberg
      • 20:00
        Welcome and Introduction
      • 21:00
        Q&A Session and Thank You's
    • 07:30
      Congress Registration and Information (07h30-17h00) | Inscription au Congrès et information (07h30-17h00)
    • 08:45
      Plenary hall opens | Ouverture de la salle plénière Rm 1150 (cap. 505) (Health Sciences Bldg., U.Sask.)

    • T-PLEN1 Plenary Session | Session plénière - Normand Mousseau, U. de Montréal
      • 142
        What can physicists contribute to the climate challenge?

        Global warming is a central issue for our civilization. While it has broad impact, it is at its core simply a matter of energy balance. Given its fundamental nature, global warming has attracted the attention of many physicists over the years. This includes climatologists, of course, many of whom hold a physics degree, and whose main work focuses on describing and understanding the impact of greenhouse gases on the evolution of the climate. It also includes physicists interested in developing critical fundamental and applied solutions to move away from a fossil-based society, as well as those interested in planning the transformation of our energy system.

        Yet, even though the energy transition involves many technology and science concepts, contributing to such a transversal transformation requires physicists to expand their traditional training: unlike physical systems, social transformations are not deterministic, requiring scenario-based thinking that is largely foreign to physics training.

        Building on my own experience working on the energy transition, I'll present examples of how physicists do contribute to this challenge by building on the fundamental scientific knowledge and problem-solving tools that are part of our physics training, and how we can use those skills for other society-wide problems.

        Speaker: Normand Mousseau (Université de Montréal)
    • 09:45
      Health Break | Pause santé
    • (DAPI) T1-1 | (DPAI)
      • 143
        The SNOLAB Ultra-Low Background Material Screening Program

        The SNOLAB laboratory is located deep underground in the Canadian Shield and hosts several science experiments which require extremely low levels of background radiation. The deep underground facilities provide significant rock overburden and thus a reduction in the cosmic ray flux and cosmic-ray-spallation induced products, such as neutrons. Nevertheless, even when an experiment is deep underground, there are still backgrounds present at levels which can hinder experimental searches for neutrino interactions or for dark matter. These backgrounds include high-energy cosmic ray muons which pass through the rock overburden and then interact with the experiment or the rock near it, as well as the detector environment itself, including radioactivity naturally emitted from the surrounding rock and from the materials used to construct the experiment. Since many of these backgrounds may be present in the underground environment and in the experimental materials themselves, it is highly desirable to measure them and to determine the effort required to reduce them further to meet the desired scientific goals of the experiments. This presentation will describe SNOLAB's low-background material screening facilities and background measurement capabilities, which can be used to directly measure these radioactive backgrounds and to search for new low-background materials for future detector fabrication.

        Speaker: Dr Ian Lawson (SNOLAB)
      • 144
        Utilization of Quantum Dots for Radiation Detection Applications

        Quantum dots exhibit size-dependent characteristics because of quantum confinement at the nanoscale. This leads to the possibility of controlling their electronic and optical properties, including the band gap. While these tunable properties of quantum dots have been exploited in photonic applications, their use in radiation detection remains comparatively unexplored. When excited by ionizing radiation, quantum dots can fluoresce at an emission wavelength determined by the band gap of the material, offering a potential route toward radiation detector concepts tailored to specific measurement needs.
        In this work, we use Stopping and Range of Ions in Matter (SRIM) and Geant4 simulations to examine the applications of quantum dots as functional materials in radiation detection systems, with a focus on how material selection, particle size, and placement within a detector impact energy deposition and light production. Initial simulation results indicate that source choice and detector geometry play a key role in determining the effectiveness of quantum dots as functional components in radiation detection systems. In this presentation, we provide highlights of our results on design considerations and parameter regimes that may enable improved performance in radiation detectors incorporating quantum dots.

        Speaker: Dr Jeremy Dion (Canadian Nuclear Laboratories)
      • 145
        Development of a Large Vacuum Chamber for Silicon Photomultiplier Testing at Liquid Xenon Temperatures

        As modern physics experiments begin to require lower and lower backgrounds to reach their sensitivity goals, be it dark matter searches or experiments seeking to probe the nature of neutrinos, new technology is needed in order to reach these stringent requirements. In the realm of light detection devices, silicon photomultipliers (SiPMs) offer an upper hand in terms of radiopurity when compared to photomultiplier tubes, which would otherwise be used. The nEXO experiment, specifically, intends to line a monolithic time-projection chamber containing 5 tonnes of xenon enriched in the isotope Xe-136 with thousands of SiPMs to detect the scintillation light from said isotope in the search for neutrinoless double-beta decay. For this, a large-scale testing infrastructure is under development at McGill to allow for mass-testing of SiPM staves, which consist of twenty square-meter panels grouping together approximately two thousand SiPMs, before their eventual installation in the nEXO detector. The setup consists of an aluminum chamber of approximately half a cubic meter in volume under high vacuum, capable of reaching pressures down to $1 \times 10^{-7}$ mbar. Inside the chamber, an XY gantry and linear rail system is used to guide a laser whose optical characteristics mimic those of xenon scintillation light onto the SiPMs, allowing for automated testing of all SiPMs on a stave within a single vacuum cycle. A cryocooler is used to cool the SiPMs to 165 K, the temperature at which xenon remains liquid at the pressures expected for nEXO. This talk will outline the mechanical and electrical engineering challenges involved with instrumenting this detector with the proper sensors and peripherals to enable its use for the outlined research goals.

        Speaker: Simon Lavoie (McGill University, (CA))
      • 146
        The ARGO detector readout and DAQ

        Physicists continue to invest significant effort in the search for dark matter using increasingly large and sensitive detectors. ARGO is a next generation experiment in conceptual development designed to push sensitivity through advanced photodetection and large-scale instrumentation. The detection medium is a ~400-tonne mass of low-background argon inside an acrylic vessel. To capture the scintillation light, the 200 m$^2$ outer surface will be covered in Single Photon Avalanche Diodes (SPADs) with a digital readout. The high potential granularity (mm-scale) of SPAD arrays results in channels numbering in the millions, requiring a new approach for the readout and data acquisition. We will present a data acquisition architecture exploiting distributed real-time artificial intelligence to identify signals of interest and extract the relevant properties, such as position, energy and probable particle type.

        Speaker: Prof. Audrey Corbeil Therrien (Université de Sherbrooke)
      • 147
        Monte Carlo modeling of pixelated digital SiPMs for the ARGO dark matter experiment

        ARGO is a future dark matter direct-detection experiment based on a liquid argon (LAr) target proposed to be constructed at SNOLAB in the next decade. ARGO will provide leading sensitivity in searches for heavy dark matter above 50 GeV/c$^2$. It will also have excellent sensitivity to detect core-collapse supernova neutrinos and produce high-precision measurements of solar neutrinos at and above the $^7$Be shoulder. For photodetection in ARGO, we are interested in pixelated silicon photomultiplier (SiPM) photosensors with fast photon timing that will allow good position reconstruction and novel hit-pattern-based event discrimination. However, the dark noise and optical crosstalk (oCT) associated with SiPMs can potentially affect the electron recoil/nuclear recoil pulse-shape discrimination and distort spatial/temporal photon hit patterns, limiting background rejection. We have developed a full Monte Carlo simulation of a pixelated SiPM system, including dark noise and oCT, to evaluate detector performance and determine constraints on SiPMs to achieve that performance. In my talk, I will describe our MC model and present some results about the impact of these SiPM noise sources on the detector energy threshold and event position reconstruction.
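
        As a toy illustration of the kind of noise model described above (the actual ARGO simulation details are not given in the abstract; the Poisson-primary, geometric-chain crosstalk model and every parameter value below are assumptions for the sketch):

```python
import numpy as np

def dark_avalanches(rate_hz, window_s, p_ct, n_trials, rng):
    """Toy SiPM noise model: dark-count primaries are Poisson-distributed
    in the readout window; each avalanche triggers at most one
    optical-crosstalk avalanche with probability p_ct (a geometric chain),
    so the mean multiplicity per primary is 1 / (1 - p_ct)."""
    primaries = rng.poisson(rate_hz * window_s, size=n_trials)
    total = primaries.copy()
    frontier = primaries.copy()
    while frontier.any():
        # Each avalanche in the current generation independently
        # fires a crosstalk child with probability p_ct.
        children = rng.binomial(frontier, p_ct)
        total += children
        frontier = children
    return total

rng = np.random.default_rng(7)
counts = dark_avalanches(rate_hz=1e3, window_s=1e-3, p_ct=0.2,
                         n_trials=50_000, rng=rng)
# Mean primaries per window is 1.0; crosstalk inflates this toward
# 1 / (1 - 0.2) = 1.25 avalanches per window on average.
print(counts.mean())
```

        Even this crude model shows why oCT matters for thresholds: it adds correlated extra avalanches on top of the Poisson dark counts, broadening the low-energy noise tail.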

        Speaker: Dr Asish Moharana (Carleton University)
    • (DTP) T1-10 | (DPT)
      • 148
        Gravitational-wave observations as astrophysical probes

        In the ten years since the first gravitational-wave (GW) detection, the LIGO, Virgo, and KAGRA detectors have amassed around 200 compact binary mergers and have revolutionized our understanding of physics. In this talk, I will describe the demographics of these objects—their mass, spin, redshift, etc., distributions—and their use as astrophysical probes. I will emphasize a hallmark feature of GW astronomy that is crucial to this exercise: our ability to model selection effects (the GW analogue of the Malmquist bias) accurately and precisely. I will also summarize our current understanding of formation pathways and the open questions that remain, with a concentration on the emerging hints of repeated mergers in the binary black hole population. I will end with my view on the exciting prospects ahead for gravitational-wave science.

        Speaker: Aditya Vijaykumar (CITA)
      • 149
        Primordial Black Holes: Gravitational-Wave Signatures

        In this talk, I will review two gravitational wave (GW) signatures of primordial black holes (PBHs) as candidates for dark matter. I investigate the stochastic gravitational wave background (SGWB) generated by PBHs in the dense cores of dwarf galaxies (DGs), considering both hierarchical binary black hole (BBH) mergers and close hyperbolic encounters (CHEs). We incorporate up to four successive generations of PBHs within a Hubble time and quantify the GW emission from both channels. The results show that while BBHs dominate the total emission, CHEs occur earlier, provide the first GW signals, and contribute a continuous though subdominant background that becomes relatively more significant once the initial PBH population is depleted and binary formation is suppressed. The resulting SGWB spectra demonstrate that BBHs and CHEs imprint distinct frequency dependencies consistent with analytical expectations. Finally, I compare the predicted signals with the sensitivity of observatories such as LISA, DECIGO, ET, IPTA, and SKA.

        Speaker: Encieh Erfani (Perimeter Institute for Theoretical Physics)
      • 150
        Gravitational Wave Emission and the Influence of External Perturbations on Compact Binary System

        Predicted by general relativity, gravitational waves (GWs) provide direct, observable signatures of strongly gravitating systems operating in extreme and often counterintuitive regimes of spacetime, including binary systems and black holes. Binary systems provide a unique means to probe how general relativity encodes spacetime curvature into observable GW signals. While the intrinsic emission of GWs from binaries is well understood, this thesis investigates a fundamental open question that extends beyond textbook GW physics: can external spacetime disturbances influence the evolution of a binary system strongly enough to modify, quench, or even reverse the inspiral driven by the system's own radiation?

        The intrinsic orbital and radiative parameters of an equal-mass circular binary and their evolution are examined by deriving the GW spectrum within the quadrupole approximation of linearized general relativity. This includes the power spectrum $P(\nu)\propto\nu^{10/3}$, the energy spectrum $|dE/d\nu|\propto\nu^{-1/3}$, and the strain spectrum $h(\nu)\propto\nu^{2/3}$, along with the associated orbital decay ($\dot{a}\propto a^{-3}$) and frequency evolution ($\dot{\nu}\propto\nu^{11/3}$). These results are then benchmarked against observational data from the Hulse–Taylor binary pulsar to confirm their validity.
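
        For reference, the quoted scalings follow directly from the quadrupole formula; a brief sketch (standard textbook results, not taken from the talk) for a circular binary of masses $m_1$, $m_2$ and separation $a$, with GW frequency $\nu$ twice the orbital frequency:

```latex
% Quadrupole GW luminosity of a circular binary; Kepler's law gives
% \nu \propto a^{-3/2}, so
P_{\rm GW} = \frac{32}{5}\frac{G^{4}}{c^{5}}
             \frac{m_1^{2} m_2^{2}(m_1+m_2)}{a^{5}}
           \propto a^{-5} \propto \nu^{10/3}.
% Orbital energy and its frequency dependence:
E = -\frac{G m_1 m_2}{2a} \propto \nu^{2/3}
\quad\Rightarrow\quad
\left|\frac{dE}{d\nu}\right| \propto \nu^{-1/3}.
% Energy balance \dot{E} = -P_{\rm GW} then yields
\dot{a} \propto a^{-3}, \qquad \dot{\nu} \propto \nu^{11/3}.
```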

        The response of a binary system to an incident gravitational wave is investigated using the geodesic deviation equation to model GW-induced tidal accelerations. The resulting energy transfer to the orbit is estimated by modeling the binary as a damped, driven harmonic oscillator, allowing for a frequency-dependent treatment of the absorbed power, considering both resonant and non-resonant driving frequencies. Under realistic conditions, external gravitational waves produce negligible orbital modifications compared to the binary’s intrinsic GW-driven inspiral.
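
        The damped, driven oscillator treatment mentioned above has a standard closed form for the cycle-averaged absorbed power; as a reference point (a textbook result, with symbols chosen here rather than taken from the talk), for $\ddot{x}+\gamma\dot{x}+\omega_0^2 x=(F_0/m)\cos\omega t$:

```latex
% Cycle-averaged power absorbed from the drive:
\langle P_{\rm abs} \rangle
  = \frac{F_0^{2}}{2m}\,
    \frac{\gamma\,\omega^{2}}
         {\left(\omega_0^{2}-\omega^{2}\right)^{2}+\gamma^{2}\omega^{2}}.
% At resonance (\omega = \omega_0) this peaks at F_0^2/(2 m \gamma);
% off resonance it falls as \omega^2 (low) or \omega^{-2} (high),
% consistent with non-resonant external GWs transferring little energy.
```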

        Speaker: Emily Spence (Carleton University)
      • 151
        Conversion between Gravitational and Electromagnetic Waves at Medium Boundaries

        It is well known that in the bulk of a background electromagnetic field, photons and gravitons convert into one another. This phenomenon is known as the Gertsenshtein effect. In my research, I use this result to examine what happens at a boundary separating a vacuum region from a background electromagnetic field, and I find that up to 34% of an incoming gravitational wave’s energy can get reflected and converted into electromagnetic waves, and vice versa.

        Speaker: Serge Hamoudou (PhD Student)
      • 152
        Near zone dynamics and relativistic correction to the retarded gravitational field

        We focus on the motion of a detector caused by a near-zone, time-dependent gravitational source, taking into account general relativistic effects. We find that the corrections due to the acceleration of the source can be large enough that they could potentially be measured. We then look at tectonic plate motion during earthquakes as a viable source that could allow for the prompt gravitational detection of earthquakes via these relativistic effects.

        Speaker: Thomas Forget (Université de Montréal)
      • 153
        Evidence for Event Horizons in AGNs from Ruling Out Thermal Surface Emission

        Active Galactic Nuclei (AGNs) are compact regions powered by the accretion of matter onto the supermassive black holes (SMBHs) at the centers of galaxies. In general relativity, black hole singularities are hidden behind event horizons, beyond which light cannot escape. However, motivated by the information paradox, a variety of other solutions have been posited that do not have an event horizon, ranging from black hole foils (e.g., gravastars, fuzzballs, etc.) to naked singularities. In this work, we compare the spectral energy distributions (SEDs) of 106 AGNs obtained from the BAT AGN Spectroscopic Survey (BASS) to those predicted by accretion-powered thermal surface emission models that are typical of black hole alternatives. In no source do we detect excess surface emission, and in 38 AGN, upon assuming that the emission is compact, the observed SED is inconsistent with the presence of excess surface emission, strongly suggesting the presence of an event horizon in those cases. For most of the unconstrained AGN, the constraint is dominated by uncertainty in the source properties (e.g., black hole mass, distance, luminosity, etc.) or variability in the SED from the accretion flow itself, and thus these could benefit from future multiwavelength observations.

        Speaker: Ekin Oran
    • (PPD) T1-11 | (PPD)
      • 154
        Probing Long-Lived Particles at the HL-LHC with the MATHUSLA Experiment

        Long-Lived Particles (LLPs) beyond the Standard Model appear in many theoretical frameworks that address fundamental questions such as the hierarchy problem, dark matter, neutrino masses, and the baryon asymmetry of the universe. The LHC may in fact be producing copious numbers of neutral LLPs with masses above a GeV, only to have these sneaky particles escape the main detectors without being spotted. To fill this gap, we have proposed the MATHUSLA detector (MAssive Timing Hodoscope for Ultra-Stable neutraL pArticles), which would be constructed on the surface above CMS and would take data during High-Luminosity LHC operations. The detector would be composed of several layers of solid plastic scintillator, with wavelength-shifting fibers connected to silicon photomultipliers, monitoring an empty air-filled decay volume. The Conceptual Design Report (CDR) published last year sets out a benchmark geometry of 40m x 40m x 25m, with a modular detector construction scheme that would allow data collection to begin as soon as the first of 16 modules is installed. This talk will summarize the results of more detailed studies conducted by the Canadian MATHUSLA team since the CDR, with the aim of building the first 4 modules (20m x 20m x 25m) in Canadian facilities. These studies include higher-statistics simulations of rare Standard Model backgrounds, FPGA implementation of trigger algorithms that would permit an “LLP trigger” to be sent to CMS, and "test stands" at the University of Victoria and the University of Toronto.

        Speaker: Miriam Diamond
      • 155
        Switched At Birth: Born Lepton Assignment of Semileptonic $t\bar{t}H$ Events

        The Higgs decay to pairs of muons is extremely rare, yet it still provides the best opportunity to measure Higgs couplings to the second generation fermions. A recent search using the ATLAS detector on the Large Hadron Collider has established evidence for this decay at the level of $3.4~\sigma$. In this search, events are categorized based on the production mode of the Higgs boson. One such category is the associated production of the Higgs with two top quarks ($t\bar{t}H$), which is one of the easiest to separate from background processes. This is due to the decay of the two top quarks, each of which provides additional information in the event. However, this additional information can sometimes cause ambiguity in the search, specifically when one of the top quark decays includes a muon. In these cases, it is not clear which two muons come from the Higgs, and which is from the top decay. The default strategy for selecting the Higgs candidate muons is inefficient in these cases, only recovering the correct muons 60% of the time. This talk investigates strategies for improving the selection of Higgs candidate muons in semileptonic $t\bar{t}H$ events which successfully recover the correct muons more than 90% of the time.

        Speaker: Ian Alejandro Ramirez-Berend (Carleton University (CA))
      • 156
        Shedding light on the dark sector with DarkLight

        The nature of dark matter remains one of the largest open questions in particle physics. Despite numerous theories and experimental searches, both small-scale and large-scale, it has so far remained unobserved at the particle level. The DarkLight experiment, located at TRIUMF in Vancouver, Canada, aims to leverage the ARIEL electron linear accelerator to search for a new dark sector force carrier in the 10-20 MeV mass range. Such a force carrier could undergo kinetic mixing with the Standard Model photon, and thus could provide possible explanations for experimental anomalies such as the X17. As the DarkLight experiment has recently been installed and commissioned, this talk will primarily focus on progress up to this point and future plans.

        Speaker: Laura Miller (TRIUMF)
      • 157
        Measurement of associated production of Higgs bosons decaying to pairs of W bosons with the ATLAS detector at the Large Hadron Collider

        Measurements of Higgs boson production in association with a vector boson can provide direct access to the Higgs boson's couplings to vector bosons (given knowledge of the other branching fractions of the Higgs), providing stringent tests of the Standard Model of Particle Physics. In this talk, I will focus on associated production of Higgs bosons (VH) decaying to pairs of W bosons (H -> WW*). In the Large Hadron Collider's Run 2, VHWW was measured using 139 fb⁻¹ of proton-proton collision data collected by the ATLAS detector. The results were compatible with the Standard Model prediction, and the process was nearly observed with 4.6 sigma significance above the background-only hypothesis. Now, I will present the latest measurement, which adds 165 fb⁻¹ of Run 3 data at a centre-of-mass energy of 13.6 TeV, as well as a statistical combination with the Run 2 result. The Run 3 measurement uses 3- and 4-lepton final states, which were statistically limited in Run 2, and are expected to be in Run 3 as well. The production cross sections times the branching ratios, measured both inclusively (the first such HWW measurement in Run 3) and in the context of the Simplified Template Cross Section Framework (one of the first Run 3 STXS measurements by any LHC experiment), will be reported. This is expected to be the most precise measurement of VHWW and likely a new observation of the process.

        Speaker: Callum McCracken (University of British Columbia (CA))
      • 158
        Studying electrical petals for the ATLAS Inner Tracker Upgrade using a micro-focused X-ray beam

        For the High-Luminosity Upgrade of the Large Hadron Collider, the ATLAS experiment will replace its current Inner Detector with the all-silicon Inner Tracker (ITk), which consists of pixel and strip systems. Relative to the current detector, the ITk features larger forward coverage, an order-of-magnitude increase in granularity, and improved radiation hardness. The ITk strip system's forward detectors or "end-caps" will consist of 7,000 silicon sensor modules. These modules are mounted onto large, double-sided support structures called "petals" which provide readout, control, power, and cooling to the underlying modules. Canada is responsible for assembling 1,500 modules into petals, corresponding to approximately 83 petals or 22% of the end-cap detectors.

        This contribution presents the test results of a Canadian-made petal in an X-ray beam at the Diamond Light Source in Didcot, United Kingdom. It is the first-ever beam test of a petal and the largest subassembly of the ITk tested in a beam to date. The beam test demonstrated the ability to reliably operate a petal for many hours. Additionally, the micron-level precision of the X-ray beam can resolve individual readout channels, enabling simultaneous measurements of mechanical properties on both sides of the petal. These mechanical properties inform the physics performance of the detector and include the relative placement of modules, which affects the hermeticity of the detector, and the relative rotation of modules, which affects the intrinsic resolution of the detector. The measured properties demonstrate excellent consistency with their specifications, verifying the quality of the petal assembly procedure. Altogether, this measurement serves as an example for future beam tests of large detector components.

        Speaker: Matthew Basso (TRIUMF (CA))
      • 159
        Systematics and Calibration for the KDK+ experiment

        KDK and KDK+ research is focused on measuring the rare decays of potassium-40 ($^{40}$K). The KDK experiment recently recorded the first experimental measurement of $^{40}$K electron capture decay directly to the ground state of $^{40}$Ar. KDK+ will follow this with an experiment aimed at obtaining a refined experimental decay constant for the $\beta^+$ decay of $^{40}$K, as the currently accepted value is in tension with modern theoretical predictions. The initial measurement will be performed using a liquid scintillator, owing to its high counting efficiency for $\beta^+$ decays and because it can be loaded with a variety of chemicals for calibration purposes. This liquid scintillator will be contained in a 300 mL vessel with PMTs placed on either end, positioned in the centre bore of an annulus of four sodium iodide (NaI) crystals, each read out by PMTs. The experiment will use a liquid scintillator loaded with $^{40}$K. The emitted positron will be detected in the liquid scintillator itself, while the two 511 keV gammas from its annihilation will be detected by the NaI annulus around the liquid. This apparatus requires systematic calibration of the NaI crystals, the liquid scintillator, and the PMTs measuring them. Work has been done to calibrate the liquid scintillator vessel, along with an extensive investigation into the methodology for loading potassium into a liquid scintillator. We have also studied the light yield of the loaded liquid scintillator, as well as its stability over durations of several weeks, and compared these results to known stable scintillators.

        Speaker: Cameron Ingo (Queen's)
    • (DCMMP) T1-12 | (DPMCM)
    • (DPMB) T1-2 | (DPMB)
      • 160
        Title to follow.

        TBA

        Speaker: Dr Claire Foottit (Department of Radiology, Faculty of Medicine, University of Ottawa)
      • 161
        Does suppressing the inflammation response in lung radiotherapy protect lung cancer cells?

        Radiation-induced lung injury, characterized by chronic inflammation and fibrosis, limits the dose we can give in lung cancer radiotherapy. Drugs that block the inflammatory/fibrosis response have been shown to protect normal lung tissues from radiation. In this study, we wish to determine if one of these drugs also protects lung cancer cells.
        To initiate this study, we developed and validated a protocol to irradiate cell cultures in 12-well plates with an orthovoltage x-ray unit. We then employed clonogenic survival assays to assess the effects of a drug that blocks the fibrosis response in the non-small cell lung cancer (NSCLC) cell line A549. Specifically, the drug we employed is a novel compound (peptide) that blocks activation of the cells’ RHAMM (Receptor for Hyaluronan-Mediated Motility) receptors.
        Dosimetric accuracy of the irradiation setup was verified using GAFchromic films after calibration. Cell survival was quantified using clonogenic assays after optimization of seeding density across preliminary trials. We obtained the radiation cell-survival curve by irradiating at doses from 2 to 10 Gy in 2 Gy increments. NSCLC cells were then pre-treated with peptide concentrations of 0.5, 5, and 50 μM prior to 4 Gy irradiation. Sensitizer enhancement ratios (SER) and statistical significance were calculated.
        The GAFchromic film exponential curve fit had an R² value of 0.996 and a mean difference of 3.1% in a test trial. Dose profiles underneath the wells had 95.6% flatness and a mean difference of 5.5%. Across two independent peptide trials, no statistically significant difference between peptide-treated and control cells was observed. At 4 Gy, SER values for the three concentrations were 1.17 (0.5 μM, p=0.68), 1.15 (5.0 μM, p=0.72), and 1.18 (50.0 μM, p=0.60).
        Preliminary results indicate that peptide pre-treatment did not alter the radiation response under the test conditions. Ongoing work includes adding the peptide after irradiation, irradiating at higher doses, and better modelling the tumour microenvironment using spheroids.
        This study assessed a novel anti-fibrosis drug for use in lung cancer cells. We integrated biological assays with physics dosimetry to provide foundational data to increase therapeutic ratios for lung radiotherapy.

        Speaker: Garrett Kirk (Western University)
      • 162
        Establishing the limit of detection of laser-induced breakdown spectroscopy for Escherichia coli in artificial cerebrospinal fluid for the diagnosis of bacterial meningitis

        Bacterial meningitis is a severe and potentially lethal infection of the meninges, afflicting more than a million people per year globally. Delays in treatment have been shown to increase both mortality rates and the strain on the healthcare system. In an effort to develop a near-instantaneous and clinically simple diagnostic test for bacterial meningitis, our group has shown that laser-induced breakdown spectroscopy (LIBS) can detect bacterial pathogens in various media to a high degree of accuracy, including Escherichia coli and Mycobacterium smegmatis in artificial cerebrospinal fluid (aCSF) to simulate a meningitis infection. Ongoing work seeks to establish the current detectable limit of bacterial presence within aCSF and subsequently reduce the lower bound, as needed, to create a viable diagnostic tool.

        In this work, the lower limit of detection of LIBS was determined by assaying several concentrations of E. coli in aCSF on a nitrocellulose medium. Optical densitometry measurements were performed to measure and fix the concentrations of eight test suspensions. Dilutions ranging from 26 500 down to 180 colony-forming units per laser shot were created. A 1064 nm Nd:YAG laser with 8 mJ pulse energy was focused to a 100 µm spot size. Each laser shot created a microplasma, and the resulting atomic emission was dispersed by a high-resolution Échelle spectrometer to produce a spectrum spanning 200 nm to 800 nm. An artificial neural network with principal component analysis preprocessing was constructed to identify the presence of bacterial cells in the spectra. In addition, a partial least squares discriminant analysis was performed on a model comprising 15 line intensities of observed elements from the full spectrum. Both models were used to ascertain the lower limit of detection.

        The sensitivities and specificities of each test will be presented. The lower limit of detection from these data will be analyzed and discussed. Future studies contributing to the reduction of this lower bound will be discussed, including dismembrating samples, introducing a double-centrifugation system, and investigating the dependence of this lower limit on the ablation laser wavelength.

        Speaker: Mr Abdullah Mustafa (University of Windsor)
      • 163
        Optical assessment of the hemodynamic and metabolic response to elevated intracranial pressure in piglets

        Elevated intracranial pressure (ICP) is a common postnatal complication in premature infants and poses life-threatening risks to the developing brain. While early detection is crucial to patient outcomes, limited diagnostic techniques currently exist for continuous monitoring. The objective of this work was to assess the sensitivity of cerebral hemodynamics and metabolism to abrupt increases in ICP. Data were collected from 7 newborn piglets using an in-house-built hybrid near-infrared spectroscopy (NIRS) and diffuse correlation spectroscopy (DCS) system. The NIRS subsystem was developed by pairing a broadband halogen lamp for emission with a charge-coupled-device-based spectrometer for detection. The DCS subsystem consisted of a 785-nm, long-coherence-length laser for emission and a single photon counting module for detection. A fitting routine based on the diffusion approximation for a semi-infinite homogeneous medium was used to quantify NIRS/DCS parameters. Specifically, NIRS was used to quantify hemoglobin oxygenation to assess the microvascular oxygen supply-consumption balance, as well as the redox state of cytochrome-c-oxidase to assess aerobic metabolism. DCS was used to monitor cerebral blood flow. ICP was gradually increased with a saline infusion into the ventricles. Segmented linear regression of cerebral blood flow, oxygenation, and metabolism revealed distinct breakpoints in the ICP level beyond which the slopes of changes in the parameters became substantially steeper. More specifically, while cerebral blood flow and hemoglobin oxygenation rapidly decreased with the induction of intracranial hypertension, the redox state of cytochrome-c-oxidase remained stable, indicating preserved metabolism despite the early hemodynamic compromise. These findings suggest the existence of compensatory mechanisms that resist hemodynamic and metabolic changes until a critical threshold of ICP is exceeded. This hybrid NIRS/DCS system is a promising non-invasive neuromonitoring tool for detecting early signs of elevated ICP to guide clinical management. Future work will aim to validate these findings in a diverse clinical population.

        Speaker: Rasa Eskandari
      • 164
        Characterization of ionizing radiation induced changes in MC38 murine colon carcinoma cells using Raman spectroscopy

        Purpose
        To develop a high-resolution Raman spectroscopy (RS) technique that can detect biochemical changes induced in MC38 murine colon carcinoma cells exposed to a range of doses of ionizing radiation.

        Methods
        MC38 cells were cultured on quartz substrates and irradiated using 6 MV X-rays from a medical linear accelerator to doses of 0, 0.25, 0.5, 1, 2, 5 and 10 Gy. The cells were fixed at 24 and 48 hours post-irradiation, with two independent sample sets prepared for each timepoint. Confocal Raman measurements were performed using a custom-built microscope with a 785 nm excitation source and a spatial resolution of 1 µm. Raman spectra from individual cells were collected over a 3 × 3 grid with a 3 µm step size over a 7 × 7 µm² intracellular region of interest. For each dose and timepoint, a total of 25 or 100 cells were sampled. All data were subjected to standard spectroscopic preprocessing, including vector normalization. Partial least squares discriminant analysis was used to perform binary classifications between doses, and the resulting loadings vectors were used to identify specific Raman bands associated with radiation exposure.

        Results
        Several features of the MC38 Raman spectrum demonstrated dose-dependent changes in response to radiation. At higher doses (5, 10 Gy), significant decreases of up to 30% relative to controls (0 Gy) were observed for Raman bands corresponding to DNA/RNA (782, 813, 1089, 1336, 1575 cm⁻¹), suggesting a decrease in DNA/RNA concentration due to exposure to ionizing radiation. Conversely, the relative intensity of some peaks associated with proteins (1001, 1448, 1665 cm⁻¹) increased by up to 20%. Comparing the 24 and 48 hr timepoints shows that some features associated with DNA/RNA (782, 1089, 1575 cm⁻¹) were up to 11% higher at the 48 hr timepoint than at the 24 hr timepoint.

        Conclusions
        Our work demonstrates the ability of RS to delineate the biochemical response of MC38 cells to ionizing radiation and has potential applicability to the low-dose exposure range (<0.1 Gy) encountered in environmental, occupational and clinical settings.

        Speaker: Connor McNairn (Carleton University)
      • 165
        Monte Carlo evaluation of energy deposition for a novel system for microdosimetry and cell radiation response

        Purpose: The overarching goal is to develop a novel microdosimetry system for investigating energy deposition and cellular responses to low-dose radiation using Raman spectroscopy (RS) and Monte Carlo (MC) simulations. The present work focuses on establishing the MC simulation framework used to quantify stochastic energy deposition within cells and at the cell-detector interface.

        Methods: MC38 murine cancer cells are cultured directly on radiochromic film (RCF), forming an integrated cell-detector system for simultaneous assessment of cellular dose deposition and biological response. Each plate containing the cell-RCF setup is irradiated using a 6-MV clinical linac photon beam with doses ranging from 0-500 mGy. MC simulations of radiation transport and energy deposition are performed using EGSnrc (egs_brachy) to replicate the experimental irradiations. Specific energy (z = energy/mass) is quantified within (i) individual cell nuclei, (ii) RS sampling volumes ($6~\mu\text{m}^3$, 3×3 grid) within the nucleus, and (iii) the active layer of the RCF, to investigate cell-detector dose correlations.

        Results: MC-derived specific energy distributions reveal pronounced stochastic variation in energy deposition in cells at low doses. The relative standard deviation of specific energy ($\sigma_z/\bar{z}$) within cell nuclei decreases with increasing dose, from 52% at 3.5 mGy to 4% at 500 mGy, reflecting reduced variation in energy deposition at higher doses. Comparison of the mean specific energy ($\bar{z}$) deposited in cell nuclei (n=189) with that deposited in the RCF active layer directly beneath the cells shows close agreement, with differences <2% across all doses. The mean specific energy within RS-relevant sampling volumes also agrees with that in the RCF within 2% across the full dose range.

        Conclusion: MC simulations show close agreement between the energy deposited in cell nuclei and in the underlying RCF, validating the integrated cell-RCF system for microdosimetry. Ongoing work includes RS analysis of irradiated cells, complementary cell viability and toxicity assays, and the application of machine learning methods to directly correlate MC-derived microdosimetric quantities with radiation-induced biochemical changes measured by RS.

        Speaker: Prarthana Pasricha (Carleton University)
    • (DAMOPC) T1-3 | (DPAMPC)
      • 166
        Optimization of nanoscale surface textures for enhanced light management in photonic devices

        III-V semiconductor photonic devices, including solar cells and photonic power converters, have superior conversion efficiencies compared to their silicon counterparts. However, high material costs limit widespread use. Light-trapping strategies such as nanoscale surface texturing can enhance absorption and improve device performance. In this work, a 3D optical model was employed to investigate front-surface texturing for light management in gallium arsenide (GaAs)-based photonic devices. Finite-difference time-domain simulations were used to solve Maxwell’s equations and compute the electric field distribution under 850 nm illumination at a power density of 1 W/cm². The optical electron-hole pair generation rate in the GaAs absorber layer was used as a figure of merit for light trapping. Texture dimensions were optimized using a particle swarm algorithm, and multiple pattern geometries were compared. Square textures were found to outperform circular ones, and optimized surface-texture designs achieved a >2× improvement in generation rate relative to an untextured surface. These results indicate the promise of surface texturing for improved light management and higher III-V photonic device efficiency. Ongoing work focuses on incorporating a back reflector to further enhance device performance.

        Speaker: Alison Clarke (University of Ottawa)
      • 167
        Diffusion of light-driven bacteriorhodopsin nanodiscs

        This project investigates bacteriorhodopsin (bR) embedded in lipid nanodiscs as a potential light-activated nanoscale motor. Bacteriorhodopsin pumps ions when exposed to light, which we hypothesize will act as a source of propulsion for the nanodiscs, in addition to their Brownian diffusion. We construct bR-nanodisc “particles” and study their motion in the absence and presence of light. Using fluorescence correlation spectroscopy (FCS), we measure how photoactivation affects the diffusion of light-activated particles compared to inactive controls. This study will help determine whether light can drive active motion at the nanoscale and contribute to a better understanding of nanoscale transport processes in biological and synthetic systems.

        Speaker: William Seavey (University of Guelph)
      • 168
        Photoactive Silver Nanoclusters with Optically Switchable Ligands for Organic Solar Cell Applications

        Unlike inorganic thin-film solar cells, which rely on stacked donor (p-type doped) and acceptor (n-type doped) layers of the same material, the photoactive layer of polymeric organic photovoltaics (OPVs) makes use of two distinct materials: a polymer donor and a molecular acceptor. These are mixed at the nanoscale in a bulk-heterojunction thin film to capitalize on the very fast relaxation rates and short diffusion lengths of electron-hole (e-h) pairs in organic soft matter. While the root cause of short e-h lifetimes in organic semiconducting polymers is still an intriguing subject of debate, much of the underlying photophysics can be understood by employing, in an OPV device, novel 'smart' molecular acceptors with precisely controlled optical properties that can act as 'photo-switches', turning 'on' and 'off' the e-h pair diffusion and dissociation through a change in their conformation.
        Our research addresses this challenge by fabricating specific bi-functional systems based on molecular nanoclusters. Metallic nanoclusters containing a deterministic number of metal atoms sit at the intersection between molecular materials and metal nanoparticles, and they are uniquely suited to this end. Our approach utilizes a carbonate-templated Ag₂₀ nanocluster, designed in-house, which has been surface-functionalized with azobenzene derivatives to impart photo-switchable behavior. The precise atomic structure of the functionalized nanocluster was confirmed by single-crystal X-ray diffraction (SCXRD). Not only does this tailored ligand shell enable optical 'on'/'off' control, but it also confers significant photostability, greatly slowing the oxidation of the Ag₂₀ core. Reversible trans-cis isomerization was monitored via ultraviolet-visible (UV-Vis) optical spectroscopy, showing near-quantitative conversion and robust cyclability. The ability to optically modulate the cluster’s electronic properties highlights its potential as a molecular switch for photovoltaic activity in OPVs. Current work focuses on exploring the solid-state properties and integrating C₆₀/C₇₀ fullerenes to realize a single-component switch-acceptor hybrid material.

        Speaker: Mr Michael Szukalo (Western University)
      • 169
        Design and Behavior of Multistack Layer–Based Spaceplates

        The physical size of optical imaging systems is one of the greatest constraints on their use, limiting the performance and deployment of a range of systems from telescopes to mobile phone cameras. Spaceplates are nonlocal optical devices that compress free-space propagation into a shorter distance, paving the way for more compact optical systems, potentially even thin flat cameras. Here, we explore the behaviour of a multistack-layer-based spaceplate and its limits.

        Speaker: Yaryna Mamchur (University of Ottawa)
      • 170
        A Tunable Platform for Writing Multiplexed Volume Holograms

        Within both science and industry, volume holograms have found countless applications; examples include data storage, signal processing and structured light generation. In practice, a hologram is produced by the interference of two light waves (reference and object) on a photosensitive medium. The resulting interference pattern is imprinted onto the medium as a modulation in its refractive index or transmission. With holography, it is possible to create holographic optical elements (HOE) that apply transformations to the input field using principles of diffraction. Volume Bragg gratings (VBG) are among the simplest HOEs. A VBG is a device whose periodic refractive index causes it to diffract light within a narrow angular range (called the Bragg condition) while leaving any other light unaffected. They can be created holographically if both writing waves are plane waves and the recording material is thick relative to the wavelength of light used. While the production of single VBGs is a straightforward task, there has been growing interest in manufacturing multiplexed VBGs, namely, several VBGs with different Bragg angles and orientations overlapping at the same position in the medium. Recent work on multiplexed VBGs has been based on writing holograms sequentially by making manual adjustments. These schemes lack flexibility in the kinds of holograms that can be written and do not lend themselves well to automating the task.

        We present a configuration for holographically writing multiplexed VBGs using a spatial light modulator (SLM) that requires no moving parts. A beam is sent to a single SLM, where it is split into two beams separated by a programmable angle. These paired beams are recombined on a holographic plate in a Mach-Zehnder-like configuration, with lenses ensuring they strike the same area regardless of their separation at the SLM. Rapid switching of the SLM pattern allows several VBGs to be written in a cyclical fashion. The approach promises great flexibility in the breadth of holograms that can be multiplexed with little effort on the part of the user.

        Speaker: Simon Lamontagne (University of Ottawa)
      • 171
        Dispersion Profile Engineering for Quantum Communication via Optical Solitons and Integrated Microresonators

        Optical solitons and microresonator frequency combs offer a powerful platform for quantum communication, but their performance is critically determined by the underlying dispersion profile of the guiding structure. This work explores dispersion engineering strategies for integrated waveguides and ring resonators to realize controlled anomalous group-velocity dispersion suitable for soliton formation, low-distortion pulse propagation, and quantum state preservation [1].
        Waveguide cross-sections are optimized to obtain smooth, weakly anomalous dispersion near the quantum channel wavelength, while accounting for realistic material dispersion and Kerr nonlinearity. This approach is extended to ring resonators, where geometric design and azimuthally modulated structures shape the integrated dispersion of cavity modes. The resulting engineered dispersion profiles support fundamental soliton propagation, dispersion-managed long-distance links, and microcavity Kerr solitons [2].
        Design guidelines are provided for on-chip sources and channels tailored to time-bin and phase-encoded quantum communication protocols, enabling preservation of phase and photon-number correlations essential for quantum information processing.

        Speaker: Zara Tehrani (Carleton University)
      • 11:45
        Discussion / Networking
    • (DCMMP) T1-4 | (DPMCM)
    • (DPE) T1-5
      • 172
        Piloting Inquiry-Based Learning in First-Year Physics for Engineers

        Curriculum change within universities is usually slow and incremental. In contrast, Physics Education Research has matured rapidly over the past decades, giving us clear evidence about which pedagogies reliably improve student learning. The gap between what we know and what is routinely implemented in practice is often wide and continues to grow.

        PHYSICS 1D03/1E03, the first-year mechanics and E&M courses for engineering students at McMaster, present a rare opportunity to help close this gap. Prompted by institutional changes, the department has elected to overhaul these courses completely. Rather than a series of small tweaks, we are rebuilding them from the ground up.

        While all course components are changing, lectures and labs anchor the redesign. The new lecture structure emphasizes student exploration and collaborative problem solving in real-world engineering contexts. Building on the wealth of active-learning research, we are adopting a Productive Failure model that introduces challenging, meaningful problems early to promote deeper sense-making and motivate learning. Meanwhile, the lab program is shifting from prescriptive, confirmatory exercises to inquiry-based labs that prioritize experimental technique, decision making, and expert-like habits of mind.

        This talk will describe the rationale and process behind the overhaul, including consultations with stakeholders, learning-goal mapping, and the development of materials. The redesigned mechanics course will be piloted in May 2026, and we will report preliminary data on student engagement and learning. The emphasis will be on lessons learned in practice: what worked well, what did not, and what questions we still need to answer before full implementation with 1300 students in the Fall 2026 semester.

        Speaker: Eamonn Corrigan (McMaster University)
      • 173
        Is there space for poetry in the physics classroom?

        Numerous educators have successfully integrated poetry in the high-school and undergraduate classroom to teach subjects across STEM, including algebra, geometry, trigonometry, statistics, biology, and in medical education. However, with the notable exception of courses like “astronomy for poets,” the poetic impulse is less prominent among physicists. Paul Dirac famously argued that “the aim of science is to make difficult things understandable in a simpler way; the aim of poetry is to state simple things in an incomprehensible way. The two are incompatible.” Based on my experience incorporating poetry as a high-school mathematics teacher and current research on arts-integrated physics, I will argue against Dirac’s view, claiming that there is space for poetry in physics education. This presentation will propose and illustrate practical ways to incorporate poetry into physics pedagogy, offering example assignments for fellow educators at both the high-school and undergraduate levels.

        Speaker: Cristian Ramirez Rodriguez (University of New Brunswick)
      • 174
        Building scientific thinking through the learning of physical laws: the detrimental role of artificial intelligence

        The teaching of the laws governing physical phenomena runs up against representations and perceptions that hinder the development of learners' scientific thinking. Indeed, learners' conceptions are generally not taken into account in the teaching-learning process, and when they are, they are treated as errors to be corrected or destroyed (Giordan 1987). The consequence is poor, or even absent, acquisition of knowledge and skills by the learner, owing to inappropriate handling of those conceptions. We must therefore build on the learner's conceptions to construct new ones, by reference to the process of knowledge and skill construction from which physical laws themselves emerged. This challenge is made more complex by the fact that learners' conceptions increasingly rest on knowledge drawn from digital sources that are not always reliable. This talk will first show how learning physical laws, supported by an inductive pedagogical approach, facilitates the development of scientific thinking in learners. It will then show that this process is now undermined by inappropriate use of artificial intelligence. Finally, we sketch some possible solutions for making AI an aid to the construction of scientific thinking.

        Speaker: Loïc NGOU ZEUFO (Cégep Beauce-Appalaches)
      • 175
        Learning By Doing: Physics of Pilates

        By treating the body as a mechanical system, this interactive workshop invites participants to examine how forces and torques manifest themselves during popular low-impact Pilates exercises that most students are already familiar with and can therefore relate to. Workshop participants will develop an intuitive understanding of centre-of-mass and equilibrium concepts. The workshop activities comprise pre-exercise prediction, physical enactment, and post-exercise reflection. Sense-making is facilitated by the use of multiple representations, including kinesthetic experience, free-body diagrams, and the equations of static equilibrium for forces and torques. The workshop models study sessions that can be customized for a university-level introductory physics class or a high-school physics class studying dynamics and statics. The workshop encourages but does not require participants to complete the Pilates exercises, making it accessible to participants of varied physical abilities.

        Speaker: Tetyana Antimirova (Toronto Metropolitan University (formerly Ryerson University))
    • (DNP) T1-6 New Facilities and Techniques for Nuclear Physics | Nouvelles installations et techniques pour la physique nucléaire (DPN) T1-6
      • 176
        TRIUMF-ARIEL: Tripling TRIUMF's RIB capabilities

        TRIUMF's ISAC facility operates ISOL targets under high-power particle irradiation at up to 500 MeV and 100 μA of current, producing Radioactive Ion Beams (RIBs) for Canadian and international nuclear and particle physics experiments. The ARIEL facility (Advanced Rare IsotopE Laboratory) is currently under construction, with the objective of adding two RIBs to the one already produced by the existing ISAC facility. One ARIEL station will receive a driver beam of 500 MeV protons at up to 100 μA from TRIUMF’s H- cyclotron. The other ARIEL station will utilize an electron beam from the new superconducting linear accelerator, with energy up to 35 MeV and beam power up to 100 kW. The addition of these two ISOL targets enables the delivery of three simultaneous RIBs to different experiments, while concurrently producing radioisotopes for medical applications.
        This contribution will describe the target station and its completion status, and will highlight the recent qualification tests performed on its core components in our offline facility. The predicted beam intensities from the two additional stations will be presented, highlighting the main strengths and weaknesses of this combined facility. Moreover, the current status of the ARIEL facilities will be discussed, along with the roadmap toward their completion and ramp-up.

        Speaker: Luca Egoriti (TRIUMF)
      • 177
        Hybrid cluster algorithms and SiPM characterization for the Barrel Imaging Calorimeter at the EIC

        The upcoming Electron-Ion Collider (EIC) at Brookhaven National Laboratory will study the quark and gluon contributions to the mass, spin, and dynamics of nuclear structure. The Barrel Imaging Calorimeter (BIC) at the EIC uses a hybrid system that combines high-performance sampling calorimetry based on lead-scintillating fiber (Pb/ScFi) technology with silicon sensors for shower profiling. The calorimeter integrates energy measurements from Pb/ScFi layers with precise particle position information provided by layers of AstroPix sensors. The contributions from the University of Manitoba focus on several aspects of BIC development, including hardware prototyping, clustering algorithms, and physics simulations.

        The ScFi layers of the BIC use silicon photomultiplier (SiPM) based readout; therefore, fiber testing with SiPMs is a crucial component of this work. In addition, cluster reconstruction of ScFi hits follows an Island Clustering algorithm, while AstroPix follows an Imaging Topological Clustering algorithm. In this presentation, I will discuss the development of a combined clustering algorithm for both ScFi and AstroPix, as well as fiber testing using both photomultiplier tubes (PMTs) and SiPMs for readout.

        Speaker: Akshaya Vijay (University of Manitoba)
      • 178
        Ultracold Neutron Transport and Simulation Studies for the TUCAN nEDM Experiment

        The TUCAN (TRIUMF UltraCold Advanced Neutron) collaboration aims to measure the neutron electric dipole moment (nEDM) with the world's highest sensitivity, $10^{-27}\,e\cdot\mathrm{cm}$, an order of magnitude better than the current best. A nonzero nEDM would shed light on new sources of charge-parity violation beyond the Standard Model and could help answer one of the biggest mysteries of the universe: the matter-antimatter asymmetry problem, which bears on why we exist at all. At TRIUMF, the experiment utilizes liquid deuterium (LD2) and heavy water (D2O) as moderators and superfluid helium (He-II) as a converter to produce a high ultracold neutron (UCN) flux, and measures the tiny shift in neutron spin precession that would indicate a nonzero nEDM by precisely controlling magnetic and electric fields. To improve the sensitivity, various quantities have been analyzed, such as the neutron storage lifetime of our UCN guide, and components such as a Y switch, which controls the direction of UCNs, have been developed to transport UCNs efficiently. Furthermore, a Monte Carlo simulation called PENTrack is used to understand UCN behaviour in the system. In this presentation, I will introduce the purpose of the TUCAN nEDM experiment, the data analysis and simulation methods used to study UCN transport and storage properties, and the development of the Y switch.

        Speaker: Ryunosuke Chiba (TRIUMF, SFU)
      • 179
        Commissioning Progress of PENeLOPE at TRIUMF

        The neutron lifetime is a fundamental parameter in particle physics and cosmology, with important implications for tests of Standard Model consistency and predictions of light-element abundances in the early universe. More precise neutron lifetime measurements are needed both to test these predictions and to help resolve the long-standing neutron lifetime puzzle. This puzzle refers to a 3.8 σ discrepancy between neutron lifetime results from beam experiments and ultracold neutron trap experiments. While the origin of this discrepancy remains unclear, it has motivated efforts to improve experimental precision and to explore possible explanations ranging from unaccounted systematic effects to physics beyond the Standard Model, including exotic neutron decay channels.

        PENeLOPE (Precision Experiment on the Neutron Lifetime Operating with Proton Extraction) is a magneto-gravitational ultracold neutron trap experiment designed to measure the neutron lifetime with a target precision of 0.1 s or better. Originally developed at the Technical University of Munich, PENeLOPE was relocated to TRIUMF in late 2023 and has since entered its first commissioning campaign at the facility. This phase focuses on reassembly of the apparatus and commissioning of the cryogenic and superconducting magnet systems. An initial cryostat cooldown attempt in early 2025 motivated a series of targeted improvements to the cryogenic infrastructure. Current efforts are directed toward implementing these upgrades and preparing for subsequent cooldowns aimed at achieving stable magnet operation and initiating magnet quench training. These commissioning activities represent key steps toward full operation of PENeLOPE.

        Speaker: Dennis Salazar
      • 180
        Fission Product Xenon for Rare-Event Detectors

        Following decades of independent experimental development, detectors exploiting the useful physical properties of xenon have emerged as leading tools to search for hypothesized physics: WIMP dark matter and Majorana neutrinos.

        As current xenon-based experiments approach the end of their operational lifetimes, plans for future detectors with substantially greater exposure are being developed. Achieving such exposure requires correspondingly larger xenon target masses. Among the many challenges associated with constructing a tonne-scale xenon detector, the acquisition of xenon itself is paramount. The requirement for tonnes (even kilotonnes) of xenon drives up the cost of these experiments and makes their development susceptible to the fluctuations of a limited market for atmospheric xenon.

        An alternative source of xenon could be found in used nuclear fuel (UNF). Xenon is the single most dominant fission product and is present in UNF at concentrations thousands of times greater than in the atmosphere. Canada’s fleet of CANDU reactors has been accumulating UNF inventory for over half a century. In this talk, I will discuss the possibility of tapping into this resource and what potential steps may be required.

        Speaker: Regan Ross (McGill University)
      • 181
        Standoff Neutron and Gamma Radiation Detection for Thermalhydraulic Safeguard Development of the McMaster Nuclear Reactor

        Thermalhydraulic power analysis is one of the most important metrics considered when operating a nuclear reactor. Not only does it establish smooth control of reactor operations, but it can also act as a safeguard: a preventative measure to ensure peaceful, non-proliferated nuclear reactor operation.
        Standoff radiation detection refers to the continuous analysis of background radiation within a nuclear reactor, which may serve as a “double-check” on conventional thermalhydraulic power measurements obtained by thermocouples. To determine whether standoff radiation detection serves as a proper safeguarding technique, an experimental standoff analysis of two radiation signatures was carried out at the McMaster Nuclear Reactor: thermal neutrons were measured by helium-3 proportional counters, and nitrogen-16 decay gammas by a sodium iodide crystal scintillator. The presence of thermal neutrons is indicative of in-core flux, while the presence of nitrogen-16 gammas is indicative of fast neutrons within the coolant. Both signatures are characteristic of nuclear reactor operation and are expected to correlate linearly with thermalhydraulic power. To develop such a safeguard, both experimental measurements and theoretical predictions were compared against actual thermalhydraulic power measurements of the McMaster Nuclear Reactor over a sixteen-month period. Disruptions in operation, such as open beamports, abrupt shutdowns, refueling cycles, and poisons, introduced challenges when analyzing the experimental data. Accounting for these effects was important in developing a safeguard for the McMaster Nuclear Reactor, and the approach may also be extended to other reactor designs.
        The results of the analysis support prior work in the field, demonstrating that standoff radiation detection is a realistic technique for nuclear reactor safeguarding.

        Speaker: Bafrin Ali (McMaster University)
    • (DPP) T1-7 Laser Plasma Interaction & Complex Plasmas | Interaction laser-plasma et plasmas complexes (DPP)
      • 182
        Single-shot reconstruction of electron beam longitudinal phase space

        Laser and plasma wakefield accelerators are promising for many applications, such as future TeV electron-positron colliders and X-ray free electron lasers (XFELs). These applications generally require high beam quality in terms of energy spread, emittance, shot-to-shot stability, etc. To achieve high beam quality, one has to precisely diagnose the beam dynamics during acceleration. This is difficult owing to the highly nonlinear acceleration process and the sub-µm, sub-fs spatial-temporal diagnostic requirements. Here, we report on a single-shot longitudinal phase-space reconstruction diagnostic for electron beams in a laser wakefield accelerator via the experimental observation of distinct periodic modulations in the angularly resolved spectra. Such modulated angular spectra arise from the direct interaction between the ultra-relativistic electron beam and the laser driver in the presence of the wakefield. A constrained theoretical model for the coupled oscillator, assisted by a genetic algorithm, was used to recreate the experimental electron spectra and fully reconstruct the longitudinal phase-space distribution of the electron beam with a temporal resolution of ∼1.3 fs. In particular, it revealed the slice energy spread of the electron beam, which is important to measure for applications such as XFELs. In our experiment, the retrieved root-mean-square slice energy spread is bounded at 9.9 MeV, corresponding to a 0.9-3.0% relative spread, even though the overall GeV-energy beam has a ∼100% relative energy spread. Particle-in-cell simulations demonstrate that our method also applies to electron beams from traditional accelerators. We show that periodically modulated electron spectra can be induced via either direct laser-electron interaction in vacuum or in a beam-driven plasma wakefield accelerator.

        Speaker: Yong Ma (University of Michigan)
      • 183
        Ultra-High Field Ponderomotive Acceleration of Electrons

        Our group has recently demonstrated that relativistic electron acceleration can be achieved directly in ambient air by tightly focusing few-cycle infrared pulses with high-numerical-aperture optics, using mJ-class femtosecond laser systems. Owing to minimal B-integral accumulation, which prevents intensity clamping, relativistic peak intensities approaching 10¹⁹ W/cm² have been achieved, resulting in electron beams with energies up to 1.4 MeV and dose rates of 0.15 Gy/s.

        Building on these experimental results, we present a combined theoretical and numerical investigation aimed at identifying the physical mechanism responsible for the acceleration and optimizing its performance. An analytical model for linearly polarized tightly focused ultrashort laser fields reflected by high-NA mirrors is derived and coupled to fully three-dimensional Particle-In-Cell simulations. By varying the laser wavelength (0.8–7 µm) and normalized vector potential (a₀ = 3.6–7.0), we confirm that acceleration is governed by the relativistic ponderomotive force, leading to preferential forward emission. A maximum electron kinetic energy of ≈1.4 MeV is predicted near a central wavelength of 1.8 µm, consistent with experimental results.
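        The reported optimum is consistent with simple ponderomotive scaling. As a back-of-envelope sketch using standard textbook estimates (not the analytical model of the paper): the normalized vector potential for linear polarization is a₀ ≈ 0.85·√(I/10¹⁸ W cm⁻²)·λ[µm], and the ponderomotive kinetic-energy scale is (γ − 1)mc² with γ = √(1 + a₀²/2):

```python
import math

MC2_MEV = 0.511  # electron rest energy in MeV

def a0_linear(intensity_w_cm2, wavelength_um):
    """Peak normalized vector potential for linear polarization,
    a0 ~ 0.85 * sqrt(I / 1e18 W/cm^2) * lambda[um] (standard estimate)."""
    return 0.85 * math.sqrt(intensity_w_cm2 / 1e18) * wavelength_um

def ponderomotive_energy_mev(a0):
    """Ponderomotive kinetic-energy scale, (gamma - 1) m c^2,
    with gamma = sqrt(1 + a0^2 / 2)."""
    return (math.sqrt(1.0 + a0**2 / 2.0) - 1.0) * MC2_MEV

# At the ~1e19 W/cm^2 intensities quoted above, near 1.8 um:
a0 = a0_linear(1e19, 1.8)
print(round(a0, 1), round(ponderomotive_energy_mev(a0), 1))  # -> 4.8 1.3
```

        The ~1.3 MeV estimate lands near the ≈1.4 MeV maximum predicted at 1.8 µm, supporting the ponderomotive interpretation.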

        We further investigate the influence of polarization and plasma density on acceleration efficiency. In the generated near-critical plasmas, linearly and circularly polarized pulses outperform radially polarized pulses in terms of both maximum energy and total accelerated charge, while linear polarization yields lower divergence beams. Scaling analyses indicate that multi-MeV electrons can be generated with charges above 1 nanocoulomb.

        These results establish tightly focused mJ-class lasers as a promising platform for compact, high-repetition-rate multi-MeV electron sources with potential applications in ultrafast imaging and FLASH radiotherapy.

        Speaker: François Fillion-Gourdeau (IPL and INRS-EMT)
      • 184
        High-Energy Electron Acceleration in Nonlinear Laser Wakefield Regimes

        Laser wakefield acceleration (LWFA) enables ultra-compact, multi-GeV electron sources, offering a promising route toward next-generation accelerators for medical, industrial, and high-energy physics applications. Using multi-dimensional PIC simulations with SMILEI, this work examines nonlinear wakefield evolution in a two-stage LWFA configuration. In the first stage, a relativistically intense laser (a₀=7.7, λ₀=0.8 μm, w₀=20 μm, E=30 J, τ=30 fs) propagates through a helium plasma (nₑ=7×10¹⁸cm⁻³), producing strongly nonlinear bubble dynamics, including disruption and merging near the relativistic wave-breaking limit. These effects enhance trapping efficiency and yield a quasi-monoenergetic ~1 GeV, 1 nC beam with low divergence and sub-10% energy spread. This beam is injected into a second LWFA stage at the same density to study coupling physics. Injection-delay tuning between the electron beam and laser pulse significantly boosts performance, enabling injected electrons to reach ~2.5 GeV and background-trapped electrons to exceed 3 GeV while maintaining charge and spectral quality. These results demonstrate a practical strategy for compact, high-quality multi-GeV plasma accelerators relevant for ultrafast X-ray generation, neutron sources, radiation therapy, and future high-energy particle collider development.
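        The plasma parameters above set the characteristic wakefield scales through the standard cold-plasma relations (these are textbook estimates, not part of the SMILEI configuration itself):

```python
import math

def plasma_wavelength_um(ne_cm3):
    """Cold-plasma wavelength, lambda_p[um] ~ 3.3e10 / sqrt(n_e[cm^-3])."""
    return 3.3e10 / math.sqrt(ne_cm3)

def critical_density_cm3(wavelength_um):
    """Critical density for a laser wavelength, n_c[cm^-3] ~ 1.1e21 / lambda^2[um]."""
    return 1.1e21 / wavelength_um**2

ne = 7e18    # helium plasma density from the abstract
lam0 = 0.8   # laser wavelength in um
print(round(plasma_wavelength_um(ne), 1))         # plasma (bubble) scale in um
print(round(ne / critical_density_cm3(lam0), 4))  # fraction of critical density
```

        The ~12 µm plasma wavelength sets the bubble size relative to the 20 µm spot, and nₑ/n_c ≈ 4×10⁻³ confirms the strongly underdense regime assumed in LWFA.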

        Speaker: RASHID UL HAQ (Shanghai Institute of Optics and Fine Mechanics, University of Chinese Academy of Sciences, China)
      • 185
        XUV Diagnostic System for Measuring the Focusing of Protons Produced by High Intensity Laser Pulses

        High intensity (≥10¹⁸ W/cm²) short pulse (≤ps) lasers allow the efficient generation of MeV energy protons from thin foils. Since the protons are emitted perpendicular to the rear surface of the foil, hemispherical foils may act to focus the protons [1-3]. This focusing is crucial to achieving sufficient proton flux on target for the proton fast ignition scheme of inertial confinement fusion [4].

        To study the generation and focusing of proton beams, a secondary thin foil target can be placed in their path. The protons heat the secondary target to temperatures of several eV, which in turn radiates Planckian blackbody emission. To study the proton heating, a diagnostic system has been developed to image a narrow band of XUV emission, with photon energies around 93 eV, from the rear surface of the secondary target. This technique was used to study proton focusing at the ZEUS Petawatt laser facility at the University of Michigan using a laser focal spot of similar size to the hemispherical radius and laser energies of around 40 J. The time-integrated brightness of the emission can be used to calculate the spatial distribution of the temperature [5,6] and, from this, the amount of proton heating that has occurred. Time-dependent hydrodynamic modeling coupled with radiation emission calculations is compared to the measurements. The XUV imaging system design, sample data, and analysis techniques are discussed, along with some initial proton heating images.

        [1] C. McGuffey et al., “Focussing Protons from a Kilojoule Laser for Intense Beam Heating using Proximal Target Structure”, Sci. Rep. 10, 9415 (2020)
        [2] T. Bartal et al., “Focusing of short-pulse high-intensity laser-accelerated proton beams”, Nat. Phys. 8, 139-142 (2012)
        [3] P.K. Patel et al., “Isochoric heating of solid-density matter with an ultrafast proton beam”, Phys. Rev. Lett. 91, No. 12 (2003)
        [4] M. Roth et al., “Fast Ignition by Intense Laser-Accelerated Proton Beams”, Phys. Rev. Lett. 86, 436 (2001)
        [5] P. Gu et al., “Measurements of electron and proton heating temperatures from extreme-ultraviolet light images at 68 eV in petawatt laser experiments”, Rev. Sci. Instrum. 77, 113101 (2006)
        [6] R. Snavely et al., “Laser generated proton beam focusing and high temperature isochoric heating of solid matter”, Phys. Plasmas 14, 092703 (2007)
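        The brightness-to-temperature inversion rests on the Planck spectral radiance evaluated at the fixed 93 eV photon energy. A purely illustrative sketch of the temperature dependence, with all constant prefactors dropped:

```python
import math

E_PHOTON_EV = 93.0  # XUV imaging band from the abstract

def relative_brightness(temperature_ev):
    """Planck spectral radiance at a fixed photon energy, up to constant
    prefactors: B(T) proportional to 1 / (exp(E / kT) - 1)."""
    return 1.0 / math.expm1(E_PHOTON_EV / temperature_ev)

# Brightness climbs steeply over the few-eV range expected from proton
# heating, which is what makes the 93 eV band a sensitive thermometer:
for t_ev in (2.0, 5.0, 10.0):
    print(t_ev, relative_brightness(t_ev))
```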

        Speaker: John Gjevre (University of Alberta)
      • 186
        Examining the impact of laser spot size and laser energy on nanosecond laser ablation to enhance laser-induced breakdown spectroscopy

        Understanding the complex nonlinear physical processes involved in laser ablation remains a challenge. A fundamental understanding of laser ablation can greatly aid in determining optimal parameters for detection techniques such as laser-induced breakdown spectroscopy (LIBS), a high-sensitivity technique for measuring the elemental composition of a material using an energetic laser to produce a plasma at its surface. Here, a 9 ns, 532 nm Nd:YAG laser was used to perform LIBS measurements of aluminum, brass, and coffee samples using three different laser focal spots (21 µm, 47 µm, and 96 µm) and five pulse energies (16 mJ to 82 mJ). For all samples, it was found that the total integrated plasma emission and plasma temperature increased with laser intensity; however, the rate of increase varied with focal spot size. Notably, dividing the total plasma emission by the spot area aligned all data points linearly with laser intensity. To interpret these results, the experimental data were fitted using the laser ablation model of R. Hergenröder [1]. The good agreement between simulation and experiment provides insight into the relationship between mass ablation and focused spot size, as well as into the increase in plasma temperature due to inverse bremsstrahlung as the laser energy increases.

        [1] R. Hergenröder, “A model of non-congruent laser ablation as a source of fractionation effects in LA-ICP-MS,” J. Anal. At. Spectrom. 21(5), 505–516 (2006).
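        The laser intensities behind these trends follow from the quoted pulse parameters via I = E/(τ·πr²). A small sketch, treating the quoted focal spot as the beam diameter (an assumption, since the abstract does not state whether the spots are radii or diameters):

```python
import math

def intensity_w_cm2(energy_j, pulse_s, spot_diameter_um):
    """Average on-target intensity, I = E / (tau * pi * r^2),
    treating the quoted focal spot as the beam diameter (assumption)."""
    radius_cm = (spot_diameter_um / 2.0) * 1e-4
    return energy_j / (pulse_s * math.pi * radius_cm**2)

# Extremes of the parameter ranges in the abstract (9 ns pulses):
print(f"{intensity_w_cm2(16e-3, 9e-9, 96):.1e}")  # lowest-intensity corner
print(f"{intensity_w_cm2(82e-3, 9e-9, 21):.1e}")  # highest-intensity corner
```

        The roughly two-decade spread in intensity across the scan is what allows the emission-per-area scaling to be tested.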

        Speaker: Shubho Mohajan (University of Alberta)
    • (PPD) T1-8 | (PPD)
      • 187
        The Hyper-K experiment

        Hyper-Kamiokande is the next-generation underground water Cherenkov detector being built in Japan and will study neutrinos originating from the J-PARC accelerator complex, the atmosphere and astrophysical sources. Its rich physics program includes the measurement of the leptonic CP-violation phase, neutrino astronomy, as well as the search for proton decay.

        Hyper-Kamiokande consists of a large cylindrical tank with 258 thousand tonnes of ultrapure water as its detection medium, located 295 km from the neutrino beam production point at J-PARC. The project also includes a suite of near detectors, among them the Intermediate Water Cherenkov Detector (IWCD), a movable 500 tonne water Cherenkov detector placed 1 km from J-PARC that is used to constrain major systematic uncertainties related to neutrino interactions in water.

        In this talk I will present an overview of the Hyper-Kamiokande experiment, describe its detector design, and report on the construction status.

        Speaker: Guillermo Arturo Fiorentini Aguirre (Carleton University (CA))
      • 188
        Calibration of Hyper-Kamiokande photomultiplier tubes using diffuse light

        Hyper-Kamiokande is a next-generation water Cherenkov detector currently under construction for precision measurements of neutrino oscillations and the search for charge-parity symmetry violation. The far detector will contain 260,000 metric tons of ultra-pure water and will be the world’s largest underground neutrino experiment, instrumented with 20,000 20-inch photomultiplier tubes (PMTs) and 800 multi-PMTs. To achieve the required sensitivity, systematic uncertainties must be tightly controlled. A major source of detector systematic uncertainty is the PMT response, in particular the detection efficiency and angular acceptance. This study aims to reduce detector systematic uncertainties by measuring the PMT angular response. A method for calibrating PMTs using five diffuse LED light sources integrated into 200 multi-PMTs is presented. Toy data are simulated, accounting for the detector geometry, diffuser emission profile, and PMT response characteristics. In the current model, the PMT angular response is assumed to depend only on the zenith angle of detection. LED intensities and angular response parameters are determined via iterative minimization.

        Speaker: Polina Sukhova (University of Windsor)
      • 189
        Rare-Event Searches with Scintillating Bubble Chambers

        Bubble chamber technology has been a mainstay of the particle detection landscape since its invention in the 1950s. By employing a superheated liquid target, bubble chambers enable dual-channel particle identification through correlated visual and acoustic signals generated during bubble nucleation. The Scintillating Bubble Chamber (SBC) experiment is advancing this well-established detector concept to extend sensitivity into the sub-keV nuclear-recoil regime. The use of a scintillating target, such as liquid argon, provides additional information for energy reconstruction while preserving the intrinsic electronic-recoil suppression of bubble chambers, enabling SBC to target energy thresholds of O(100 eV). The collaboration is concurrently developing two near-identical 10-kg liquid-argon detectors at Fermilab (SBC-LAr10) and SNOLAB (SBC-SNOLAB). SBC-LAr10 has recently begun operation, and this talk will present early results from this initial phase of operation. Its physics programme focuses on calibration and coherent elastic neutrino–nucleus scattering studies, while also serving as an engineering testbed for the forthcoming SBC-SNOLAB detector. The latter detector is being purpose-built for GeV-scale dark matter searches, enabled by the ultra-low-background environment at SNOLAB. This talk will highlight recent hardware developments at Queen’s University in preparation for the construction of SBC-SNOLAB, including testing of an array of custom FBK VUV HD3 silicon photomultipliers, camera integration, and the development of data acquisition and event readout methodologies, building on the successes and lessons learned from SBC-LAr10.

        Speaker: Carter Garrah (Queen's University)
      • 190
        Molecular and Fluid Dynamics Simulations for the Scintillating Bubble Chamber Experiment

        The Scintillating Bubble Chamber (SBC) dark matter experiment uses superheated xenon-doped liquid argon to probe new areas of the WIMP mass-cross-section parameter space. Accurate simulations of the behaviour of both the liquid argon active volume and the CF4 hydraulic fluid are integral to ensuring smooth operation and an accurate understanding of the nucleation threshold. SBC's active volume is superheated at 120 K and 30 psi, well outside the regime where experimental measurements of argon's physical and thermal properties are available. This necessitates a novel combination of molecular dynamics and fluid dynamics simulations to accurately model the fluid behaviour of the detector. The approach taken for molecular dynamics simulations applies statistical physics to derive bulk physical and thermal properties of the fluids from equilibrium molecular dynamics simulations. These properties can then be used for bulk fluid simulations of the detector’s hydraulic fluid and the argon volume in both the cold and superheated regions. These simulations, in turn, are iteratively improved to better reflect conditions in the detector as preliminary temperature and pressure data become available during commissioning. This talk will discuss the integration of molecular dynamics simulation, bulk fluid simulation, and experimental data to model SBC's fluid behaviour.

        Speaker: Ezri Wyman (Queen's University)
      • 191
        New Modular Analysis Framework for the Multi-channel Read-out of the NEWS-G Dark Matter Experiment for Directional Sensitivity

        The New Experiments With Spheres-Gas (NEWS-G) direct dark matter search experiment uses spherical proportional counters filled with light noble gases, enabling sensitivity to very low-mass dark matter parameter space. It is equipped with a high-voltage 11-anode sensor, “ACHINOS”, to detect electrons from ionizing radiation, currently read out through only two combined channels. Independent readout of the 11 anodes would enable directional sensitivity in the NEWS-G detector, a key capability for discriminating dark matter signals from solar neutrino backgrounds. To detect signals from each anode of the ACHINOS, a new low-noise charge-sensitive preamplifier has been developed at the University of Alberta and tested with a small-scale NEWS-G detector in the Piro Lab. The analysis and processing of the signal have been performed with a new modular data analysis framework, which will be presented. Preliminary results show improved event selection and promise for future multi-anode readout scalability. These results also provide quantitative feedback for hardware refinement and establish the analysis tools needed for scalable, multi-channel spherical proportional counter readout.

        Speaker: Mana Sakaguchi (University of Alberta)
      • 192
        Developing Scalable Readout Electronics for Directional Sensitivity of the NEWS-G Dark Matter Experiment

        The New Experiments With Spheres-Gas (NEWS-G) experiment uses a spherical proportional counter filled with a light noble gas mixture to directly detect light dark matter particles. The detector consists of a large sphere, equipped with a high-voltage 11-anode sensor, the “ACHINOS”, to detect electrons from ionizing radiation. The current electronics setup limits the readout to two channels, combining the 5 top anodes and the 6 bottom anodes as the north and south channels, respectively. The ability to separately detect electrons from the 11 (or more) anodes will provide the NEWS-G detector with directional sensitivity; this will be a significant asset for discriminating against the ultimate challenge posed by the solar neutrino background in dark matter searches. In this presentation, a new scalable electronics design for the readout of the 11-anode sensor, currently being built at the University of Alberta, will be presented. The testing of this new electronics layout on a small-scale NEWS-G detector in the Piro Lab will be discussed, as well as future plans for scalability beyond 11 channels.

        Speaker: Julia Redinger (University of Alberta)
    • (DQI) T1-9 | (DIQ)
      • 193
        Does the NPA hierarchy attain the commuting operator value at some finite level?

        The NPA hierarchy, of Navascues, Pironio, and Acin, is a widely used tool for analyzing nonlocality across a range of settings in quantum information science. In the context of nonlocal games, this hierarchy of semidefinite programs (SDPs) provides a non-increasing sequence of upper bounds converging, in the limit, to the commuting operator value. In fact, a corollary of the landmark MIP*=RE result employs the NPA hierarchy to conclude that there are nonlocal games for which the quantum (entangled) value is strictly less than the commuting operator value, providing a separation between the quantum and commuting operator models for quantum correlations. Despite much recent advancement, a fundamental question about the value of the NPA hierarchy remained open: given a nonlocal game, does there exist a finite level at which the NPA hierarchy attains the commuting operator value? Perhaps surprisingly, either a positive or a negative answer to this question would be consistent with the recent undecidability results for the quantum and commuting operator values. In this talk, I will show that the question has a negative answer. Moreover, I will discuss how the answer follows from a seemingly unrelated question about the computability of the commuting operator value.

        Speaker: Prof. Connor Paddock (University of Calgary)
      • 194
        Logical Bell Inequalities, Magic States, and Lambda Polytopes

        We investigate the role of logical Bell inequalities in identifying quantum states useful for magic state distillation. Our approach provides an alternative route to characterizing contextuality as a resource, entirely within the logical framework. In particular, we derive logical Bell inequalities that delineate the faces of the simulable polytope of a single qudit, that is, the set of states non-negatively represented by the discrete Wigner function. This formulation highlights how logical Bell inequalities can serve as a tool for pinpointing non-stabilizer states relevant to quantum computational advantage, and it opens a pathway toward relating contextuality-based characterizations of magic to lambda-polytopes.

        Speaker: Sanchit Srivastava (Institute for Quantum Computing, University of Waterloo)
      • 195
        Generating bosonic code state with noisy ancilla

        Quantum error correction (QEC) is essential for protecting quantum information from ubiquitous environmental noise. The Gottesman-Kitaev-Preskill (GKP) state is a particularly promising bosonic QEC code state that can protect the information encoded in an oscillator against dominant bosonic errors. Approximate GKP states can be generated by performing adaptive phase estimation with an ancilla qubit, yet existing proposals usually require a high-quality qubit to attain good GKP states. Here, we propose a method to generate GKP states with a dephasing qubit ancilla. We find that dephasing errors alter and reduce the inferred knowledge about the phase, but these effects can be compensated simply by more rounds of phase estimation. We introduce a modified phase estimation process that optimizes the inferred phase knowledge with ancilla dephasing fully taken into account. Our simulations demonstrate that high-fidelity GKP states can still be generated even when the ancilla dephasing is significant. Our results improve the practicality of bosonic QEC in realistic platforms.

        Speaker: Thomas Lin (Simon Fraser University)
      • 196
        Entanglement sharing schemes

        We ask how quantum correlations can be distributed among many subsystems. To address this, we define entanglement sharing schemes (ESS), in which certain pairs of subsystems allow entanglement to be recovered via local operations, while other pairs must not. ESS come in two variants: one where the partner system with which entanglement should be prepared is known, and one where it is not. In the case of known partners, we fully characterize the access structures realizable for ESS using stabilizer states, construct efficient schemes for threshold access structures, and conjecture which access structures are realizable with general states. In the unknown-partner case, we again give a complete characterization in the stabilizer setting, fully characterize the case where there are no restrictions on unauthorized pairs, and prove a set of necessary conditions on general schemes that we conjecture are also sufficient. Finally, we give an application of the theory of entanglement sharing to resolve an open problem related to the distribution of entanglement in response to time-sensitive requests in quantum networks. Based on https://arxiv.org/pdf/2509.21462.

        Speaker: Stanley Miao (Perimeter Institute for Theoretical Physics)
      • 197
        Entanglement summoning

        In an entanglement summoning task, a set of distributed, cooperating parties attempt to fulfill requests to prepare entanglement between distant locations. The parties share limited communication resources: timing constraints may require the entangled state to be prepared before some pairs of distant parties can communicate, and a restricted set of links in a quantum network may further constrain communication. Such a task may arise in real-world quantum networking scenarios, where a set of labs under timing constraints may want to generate entanglement between a specific pair of labs to implement a quantum protocol.

        Building on previous work, we continue the characterization of entanglement summoning. In particular, we provide a necessary and sufficient condition for entanglement summoning tasks with only bidirected causal connections between parties, and we offer a set of sufficient conditions addressing the most general case, encompassing both oriented and bidirected causal connections. Our results rely on the recent development of entanglement sharing schemes.

        Speaker: Lana Bozanic
      • 198
        Alice and Rob Revisited: How quantum reference frames can provide new insights into entanglement degradation

        Relativistic Quantum Information (RQI) uses tools from quantum information theory to quantify physical phenomena at the interface between general relativity and quantum theory. An important result from RQI describes how a maximally entangled state shared by Alice and Rob is degraded when Rob undergoes uniformly accelerated motion relative to Alice.
        In this talk, I will revisit the problem of entanglement degradation by considering how other quantum resources are affected by acceleration. Specifically, I will answer the question of whether other quantum resources can "offset" the degradation observed in the entanglement using recent results about quantum reference frames.

        Speaker: Everett Patterson (Dept. of Physics and Astronomy, University of Waterloo)
    • 12:00
      Break for Lunch (12h00-13h30) | Pause pour dîner (12h00-13h30)
    • Postdoctoral Fellows and Early Career Researchers Networking Session (CAP) | Session de réseautage pour les boursiers(boursières) postdoctoraux et les chercheurs(chercheuses) en début de carrière (ACP)

      This networking session is for postdoctoral fellows and early career researchers attending the CAP Congress. Whether you’re looking for advice or simply interested in expanding your professional network, this session is a place to engage with peers facing similar challenges and opportunities at this early career stage, and to learn more about CAP programs and support. For example, CAP is considering extending the reduced membership-fee period for early-career full members from 4 years to 6, and adding a Councillor to represent early career physicists on the CAP Advisory Council. The session aims to help connect the early-career physics community within Canada.

    • Science Policy Session - Topic TBD
      • 12:00
        Forging CAP's Path: Science Policy & Advocacy in a Turbulent World

        US scientific disruptions and declining public trust are eroding the research social contract and challenging open science globally, with inevitable impacts on Canada. This interactive CAP session aims to proactively shape our community's response. Following an introduction by CAP’s Director of Science Policy and Advocacy and identification of key themes, members will collaborate in groups on a topic of common interest. Together, we will develop concrete recommendations and strategic priorities for CAP's science policy and advocacy. Your insights are crucial for equipping CAP to champion a resilient, open scientific future for Canada. Come to contribute or just learn about science policy in Canada.

    • T-MEDAL1 Medalist Talk | Conférence du lauréat de la médaille
    • 14:00
      Travel Time | Déplacement
    • (DAPI) T2-1 | (DPAI)
      • 199
        Radon-222 Screening Capability and Research at SNOLAB

        Radon-222 is a limiting background in many leading dark matter and low-energy neutrino experiments. At SNOLAB, we operate various radon instruments dedicated to material screening and to measuring radon concentrations in N₂ gas systems and in ultra-pure water. This talk will describe these instruments, as well as a recent development aimed at improving our N₂ gas assay capability.

        Speaker: Nasim Fatemighomi (SNOLAB)
      • 200
        Radon measurement in a low-background experiment using a hybrid trapping system

        A new hybrid system for trapping the radon-222 emanated by an object is proposed. This talk presents the calibration steps carried out at the SNOLAB facility for low radon concentrations (~ µBq/m³). High-sensitivity background measurement is required in dark matter experiments to ensure that the system meets the projected background limit (0.1 event/year for the PICO experiment); indeed, radon is a known source of alpha-induced neutron background via (α, n) reactions. To quantify it, an ultra-pure charcoal with low intrinsic radioactivity (0.2 mBq/kg) and suitable adsorption capacity (greater than that of bronze wool) is used. However, charcoal is prone to clogging at low temperatures (<-60 °C), unlike bronze wool. By combining the two, it is in principle possible to exploit the advantages of both. Specifically, the length of the column, containing 15 g of carbon in a one-inch tube, is halved thanks to the bronze wool that separates and frames it. To assess the validity of this device, the trapping efficiency is determined at various cryogenic temperatures and assay times. It is possible to perform the measurement with a limited gas load (<0.1 mbar) in the emanating system with bronze wool as the sole trapping material, but this method is not employed because it is ineffective for assessing large emanating objects. The first approach involves allowing the system to emanate Rn under a certain amount of N₂; it provides a conservative measurement, as diffusion-induced emanation decreases with increasing vacuum pressure. The second approach does not require introducing N₂ into the system during emanation; N₂ is simply used to facilitate the transfer of Rn from the object to the trap. This strategy offers the greatest sensitivity. Note that a water trap is also used, as humidity can distort the measurement.
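        The dependence of sensitivity on emanation (assay) time comes from the usual ingrowth of Rn-222 toward its saturation activity; a minimal sketch using the 3.82-day half-life:

```python
import math

RN222_HALF_LIFE_DAYS = 3.8235

def ingrowth_fraction(t_days):
    """Fraction of the saturated Rn-222 emanation activity accumulated
    after t_days: 1 - exp(-lambda * t), with lambda = ln(2) / T_half."""
    lam = math.log(2.0) / RN222_HALF_LIFE_DAYS
    return 1.0 - math.exp(-lam * t_days)

for t in (1.0, 3.8235, 7.0, 14.0):
    print(f"{t:7.2f} d: {ingrowth_fraction(t):.3f}")
```

        Half the saturation activity is reached after one half-life and over 90% after two weeks, which is why emanation runs much longer than a couple of weeks yield little additional signal.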

        Speaker: Hantz Nozard (Université de Montréal)
      • 201
        Improving SNOLAB Radon counting sensitivity with low-background ZnS

        Rn-222 progeny produce unwanted background events in underground rare-event searches, including those for dark matter and neutrinoless double beta decay. ZnS(Ag)-coated Lucas cells were used during the SNO experiment to evaluate radon emanation into light water, and they continue to be used for ex-situ measurements of radon concentration in SNO+ and for materials assays at SNOLAB. Support for current and future experiments housed at SNOLAB motivates the development of new Lucas cells to further improve SNOLAB's capabilities for characterizing material radiopurity with greater precision and sensitivity.
        In this presentation, radon assays are introduced and developments for next-generation Lucas cells are discussed. Topics include background characterization of Lucas cell components, fabrication methods, and development of ZnS synthesis and doping techniques. Future prospects and current investigations will be shared, including studies into the elimination of U-238 chain impurities.

        Speaker: Peter Qin (SNOLAB)
      • 202
        Study on the seasonal variation of indoor radon levels across Canada

        Radon, a naturally occurring radioactive gas present in our homes, is the leading cause of lung cancer among non-smokers. With nearly 1 in 5 Canadian homes exceeding Health Canada’s radon guideline level of 200 Bq/m³, radon exposure remains a critical public health issue that is responsible for over 3,000 lung cancer deaths annually in Canada. As part of its National Radon Program, Health Canada consistently reviews and updates national radon risk guidance and resources to adapt to the evolving radon landscape in Canada, ensuring that its actions are evidence-based and effectively mitigate this health risk.

        Due to the various factors affecting radon levels across Canada’s varied geography and climate, identifying data that directly answer key questions about Canadians' radon exposure can be challenging. Currently, Health Canada recommends that Canadians test their home for three months during the heating season. This recommendation assumes that radon levels are at their highest during this period. However, limited data exist quantifying the extent, if any, of the seasonal variation of residential radon levels throughout the various regions of Canada. To address this, the National Radon Program has been conducting year-round testing in homes since September 2023.

        Health Canada will present recent progress on the study of seasonal variations of radon levels across Canada. The primary objective is to measure the variation of radon levels in Canadian homes over a 12-month period. By collecting seasonal measurements, it can be determined if the current testing recommendation provides a reasonable estimate of the average annual radon level in a home. Based on the results, new guidance may be developed for radon testing outside the heating season, widening the testing window and ultimately making testing easier and more accessible to Canadians.

        Speaker: Robert Stainforth (Health Canada, Government of Canada)
    • (DTP) T2-10 | (DPT)
      • 203
        The Large Magellanic Cloud and dark matter searches

        The Large Magellanic Cloud (LMC) can significantly impact the dark matter halo of the Milky Way, and boost the dark matter velocity distribution in the Solar neighborhood. Cosmological simulations that sample potential Milky Way formation histories are powerful tools, which can be used to characterize the signatures of the LMC’s interaction with the Milky Way, and can provide crucial insight on the LMC’s effect on the local dark matter distribution. I will discuss the impact of the LMC on the local dark matter distribution in state-of-the-art cosmological simulations. I will then present the implications for dark matter direct detection, considering both standard and non-standard dark matter interactions and different dark matter masses.

        Speaker: Nassim Bozorgnia
      • 204
        Jeans Model for the Shapes of Self-Interacting Dark Matter Halos

        The Jeans model for self-interacting dark matter (SIDM) halos has been shown in recent literature to reproduce, with surprising accuracy, the spherically averaged halo profiles inferred from observations and simulations of relaxed galaxies and galaxy clusters. However, in general, dark matter halos are not spherically symmetric, owing to asymmetric baryon distributions and residual triaxiality from hierarchical assembly. Halo shapes therefore provide an important observational discriminant: in the dense interior, frequent SIDM scatterings isotropize particle velocities and erase the ellipticity characteristic of collisionless CDM halos. This distinction is especially relevant for massive galaxies, where baryons can cause SIDM and CDM halos to have nearly identical spherically averaged density profiles but still exhibit markedly different shape profiles. In this talk, I present recent work extending the Jeans model to describe SIDM density and shape profiles beyond spherical symmetry. I introduce a novel “squashed” Jeans model that captures the physical requirement that multiple scatterings are needed to modify halo shapes while remaining computationally inexpensive compared to full axisymmetric treatments. I then show results validating the squashed model against cosmological hydrodynamical simulations for both SIDM and CDM, and conclude with an application to Milky Way rotation curve data.

        Speaker: Adam Smith-Orlik (York University)
      • 205
        Stochastic dark matter

        Motivated by discrete approaches to quantum gravity, we formulate the covariant Brownian motion of free particles described by a stochastic geodesic equation. At the level of the Fokker-Planck equation, this approach provides the unique covariant diffusion equation in the absence of a preferred frame. The uniqueness makes the equation the effective description of a wide range of possible quantum gravity effects. When applied to dark matter particles, it results in dynamical warming at late times, suppressing the matter power spectrum at small scales. Thus, we show that the model has potential for alleviating the $S_8$ tension.
        When applied to gravitons, the model predicts spreading and drifting of the gravitational-wave power spectrum. Thus, LISA will be able to place bounds on the massless diffusion constants, assuming it detects a peaked spectrum of primordial gravitational waves.

        Speaker: Dr Arad Nasiri (University of New Brunswick)
      • 206
        Visualizing the Principle of Least Action

        The Principle of Least Action is a guiding concept in fundamental physics with prominent appearances in many fields, including classical mechanics, general relativity, and quantum mechanics. The general principle is that physically realizable paths are those that extremize the ‘action’ relative to nearby paths. This work explores example systems that highlight important characteristics of the Principle of Least Action, and proposes visualizations of these systems that will give the reader a deeper intuitive understanding of this principle. We begin with a simple analytically solvable 1D spring system that allows for clear visualization of nearby paths and their actions; subsequent numerical simulations of more complex systems are then added to discuss further insights. These visualizations provide an accessible entry point to the Principle of Least Action and facilitate its understanding in educational contexts.
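        The comparison at the heart of such visualizations can be sketched numerically. In this minimal example (grid size, duration, and perturbation amplitude are illustrative choices, not the authors' implementation), the discretized action of the exact 1D spring path is compared against a nearby path with the same endpoints:

```python
import numpy as np

def action(x, dt, m=1.0, k=1.0):
    """Discretized action S = sum (T - V) dt for a 1D spring (midpoint rule)."""
    v = np.diff(x) / dt                      # velocities between grid points
    x_mid = 0.5 * (x[:-1] + x[1:])           # positions at interval midpoints
    lagrangian = 0.5 * m * v**2 - 0.5 * k * x_mid**2
    return np.sum(lagrangian) * dt

N, T = 1000, 1.0                             # T < half period, so the true path is a minimum
t = np.linspace(0.0, T, N + 1)
dt = T / N

x_true = np.cos(t)                           # exact solution with x(0)=1, x(T)=cos(T)
bump = 0.2 * np.sin(np.pi * t / T)           # perturbation vanishing at both endpoints
S_true = action(x_true, dt)
S_pert = action(x_true + bump, dt)
print(S_true < S_pert)                       # the physical path extremizes the action
```

        Because the chosen duration is shorter than half an oscillation period, the true path is a genuine minimum; for longer durations it becomes a saddle point, which is itself a useful teaching moment.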

        Speaker: Ciaran McDonald-Jensen (Carleton University)
      • 207
        Wormholes

        Wormhole solutions in gravitational theories typically require exotic matter. Here we present a wormhole solution to the field equations of Einsteinian Cubic Gravity — a phenomenological competitor to general relativity that includes terms cubic in the curvature — that has no matter, exotic or otherwise. These purely gravitational wormhole geometries are asymptotically AdS but contain a geometric deficit at infinity. The deficit, interpreted as a global monopole, plays an essential role in our construction. We find that our wormhole solution satisfies traversability criteria. We also find, for different parameters, a range of possible wormhole solutions.

        Speaker: Mengqi Lu (University of Waterloo)
    • (DPMB) T2-11 | (DPMB)
    • (DCMMP) T2-12 | (DPMCM)
    • (DPMB) T2-2 | (DPMB)
      • 208
        Automatic vs manual segmentation for electron microscopy images in a male mouse

        Brains are composed of neurons, whose axons carry signals; myelin insulates axons and aids in propagating electrical and chemical signals. Postmortem studies suggest that axon properties change with various neurological conditions, and imaging methods can aid in diagnosis and in understanding how axons change over time. Electron microscopy (EM) is the current gold standard for measuring brain microstructure due to its high resolution, but manual segmentation of EM images is extremely time-consuming, prohibiting large-scale studies. The ultimate goal of this project is an automated segmentation method for EM data that can accurately delineate axons, myelin, and other cells for comparison with a newly developed in vivo magnetic resonance (MR) method. As a first step, an automated segmentation model was compared with the corresponding manual segmentations of axons. Eight EM images from one male mouse were manually segmented, defining the intra-axonal space and myelin. The light microscopy model of Segment Anything for Microscopy (µsam) was used for automatic segmentation of the axons in the images, and the manual and automated segmentations of each image were compared using the Dice similarity coefficient (DSC), precision, and accuracy. Relative to the intra-axonal manual segmentation, the average and standard deviation of the DSC, precision, and accuracy over all images were 0.78±0.03, 0.63±0.04, and 0.70±0.03, respectively; relative to the combined intra-axonal and myelin manual segmentation, they were 0.91±0.02, 0.93±0.05, and 0.85±0.03. Manual segmentation took approximately 42 hours per image, whereas the automated prediction was completed in an average of 3.9 seconds per image. The reduced segmentation time comes at the cost of µsam only detecting larger cells, whereas the longer manual segmentation allowed every cell to be segmented.
        These results demonstrate that the automated predictions better represent the combined intra-axonal space and myelin than the intra-axonal space alone. The next steps are to test other automated models to evaluate whether they perform better than µsam when compared with the MR results on the same tissue, and to investigate whether additional models can automatically segment other cells and separate the myelin from the axon.
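        The overlap metrics used in this comparison can be computed directly from binary masks. A minimal sketch (the toy arrays stand in for segmented EM images; this is not the study's pipeline):

```python
import numpy as np

def dice_precision_accuracy(pred, truth):
    """Overlap metrics for binary segmentation masks (1 = foreground)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)     # true positives
    fp = np.sum(pred & ~truth)    # false positives
    fn = np.sum(~pred & truth)    # false negatives
    tn = np.sum(~pred & ~truth)   # true negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return dice, precision, accuracy

# toy 1D "masks" standing in for a prediction and a manual segmentation
pred  = np.array([1, 1, 0, 0, 1, 0])
truth = np.array([1, 0, 0, 0, 1, 1])
print(dice_precision_accuracy(pred, truth))
```

        The DSC rewards overlap symmetrically, while precision penalizes only false positives, which is why the two can diverge when an automated model under- or over-segments.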

        Speaker: Jessica de Kort (The University of Winnipeg)
      • 209
        Preclinical Evaluation of Assigning Anatomical Labels in Magnetic Resonance Imaging Using a Transgenic 5XFAD Mouse Model of Alzheimer’s Disease

        Alzheimer’s disease (AD) is the most common form of progressive neurodegenerative dementia and a leading cause of death worldwide. The definitive cause of AD remains unknown, but its development involves a multifaceted etiology. Early AD diagnosis is crucial, as pathology begins decades before symptoms appear and the diagnosis can only be confirmed post-mortem. Imaging techniques such as magnetic resonance imaging (MRI) provide insight into the distinct patterns of brain inflammation followed by degeneration. In both clinical and preclinical imaging, MRI enables non-invasive, longitudinal, volume-based quantification of regions of interest with sensitivity to multiple tissue properties. Despite advances in preclinical imaging, there is limited consensus on standardized, quantitative methods for assigning anatomical labels to brain regions in mouse models of cognitive decline. From a medical physics perspective, reliable anatomical labeling is essential for quantitative MRI, yet automated pipelines remain under-validated in preclinical disease models.
        This project addresses this challenge using the most aggressive disease model, 5XFAD transgenic mice, to detect early differences in brain anatomy. The primary objective is to determine whether MRI can be used as an early diagnostic tool by detecting age-dependent structural changes across the lifespan using a 1 T M2™ Compact High-Performance MRI System (Aspect Imaging) and a 3D imaging method. A major challenge in pre-clinical studies is the lack of accessibility and interrater variability in quantifying neurodegeneration. This project explores diverse MRI processing strategies, including manual segmentation using the Allen Mouse Brain Atlas and the computational pipeline AIDAmri. By exploring a variety of techniques, the anticipated outcome is to evaluate the robustness through cross-method agreement, age-dependent consistency, and sensitivity to preprocessing choices in the 5XFAD model, not previously characterized by the AIDAmri pipeline. This research advances techniques for automated anatomical labeling in preclinical MRI and supports the development of standardized imaging pipelines applicable to longitudinal therapeutic investigations of neurodegeneration.

        Speaker: Alex Stoinescu (University of Windsor)
      • 210
        Optimized magnetic resonance imaging sampling schemes for mapping severe magnetic field distortions near metal implants

        Magnetic resonance imaging (MRI) is a non-ionizing diagnostic technique that detects signals from hydrogen nuclear spins precessing in a strong static magnetic field, $B_0$. Metallic materials commonly used in orthopaedic and dental implants, such as titanium and cobalt-chromium alloys, introduce large magnetic susceptibility differences that significantly distort $B_0$. These distortions lead to rapid intravoxel signal dephasing, severe image artifacts, and signal voids, rendering conventional MRI measurements ineffective in the vicinity of implants.
        Previous work has demonstrated that pure phase encoding MRI approaches can be used to measure magnetic field distortions in such extreme off-resonance environments. However, the quality of the resulting field maps is often degraded by aliasing artifacts, particularly in later echo images. In this work, we propose an optimized data sampling scheme that balances image quality and acquisition time, substantially improving the robustness of field mapping near metal.
        We apply this approach to measure magnetic field distortions surrounding titanium and cobalt-chromium mouse hip implants. In addition, we extend the analysis from analytical solutions based on idealized geometries to numerical simulations using finite element methods. The strong agreement between experimental measurements and simulations demonstrates that the proposed sampling strategy enables reliable quantitative field mapping in the presence of severe susceptibility-induced distortions. These results provide a foundation for the development of improved spatial encoding schemes for MRI in extreme off-resonance environments.

        Speaker: Ms Mila Vasquez
    • (DAMOPC) T2-3 | (DPAMPC)
      • 211
        Progress toward a dynamic optical dipole trap for rapid evaporative cooling

        Cold-atom-based inertial sensors employ ultra-cold neutral atoms to improve sensitivity and accuracy. Evaporative cooling is a widely used process for reaching ultra-cold temperatures, and dynamic optical dipole traps (ODTs) have demonstrated temperatures below 100 nanokelvin on evaporation timescales of 1 s. We previously developed a numerical model that predicts the evolution of temperature and atom number during evaporative cooling based on experimental parameters. Using this model, we found that spatially modulated optical traps can cool to lower temperatures while losing fewer atoms than unmodulated traps. Guided by these simulations, we report on the design and construction of a dynamic crossed-beam ODT used to cool thermal clouds of rubidium-87 atoms to the nanokelvin regime. Using a high-power 1064 nm laser and two acousto-optic deflectors, we generate two independent time-varying optical potentials with user-defined spatial profiles. We also describe a fluorescence imaging system to measure the trap dynamics in situ. Our all-optical dynamic evaporation approach is applicable to a wide variety of neutral atom species.

        Speaker: Cristian Ramirez Rodriguez (University of New Brunswick)
      • 212
        Temperature-Dependent Absorption Spectroscopy of a $^{87}$Rb Vapor Cell for EIT-Based Quantum Memory

        Electromagnetically induced transparency (EIT)–based quantum memories require an atomic ensemble with sufficient optical depth and stable frequency detuning conditions. In this work, we characterize a Rb-87 atomic vapor cell by measuring near-resonant absorption (transmission) spectra under controlled temperature settings, aiming to identify suitable operating conditions for subsequent EIT storage experiments. The transmitted probe signal is recorded during repeated laser scans across the D-line resonance region. To enable reliable comparisons across scans and temperatures, we implement an anchor-based piecewise frequency-axis alignment that compensates scan nonlinearity and slow drifts, using reproducible spectral landmarks as reference points. The aligned spectra are averaged to form a robust reference absorption profile, and key spectral metrics (e.g., absorption depth and effective linewidth) are tracked as functions of temperature. The measurements show a clear temperature dependence of the absorption strength, consistent with an increase in vapor number density, while the observed linewidth reflects combined broadening and decoherence contributions. These results provide (i) a practical temperature-dependent characterization of absorption background and effective optical depth and (ii) a repeatable relative-detuning reference for locating and monitoring EIT operating points in a warm-vapor quantum memory platform.

        Keywords: EIT, quantum memory, Rb-87 vapor cell, absorption spectroscopy, temperature dependence, frequency alignment
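        The anchor-based alignment step can be sketched as a piecewise-linear map from raw sample index to relative detuning, pinned at landmark positions. All indices and frequencies below are hypothetical, chosen only to illustrate the idea:

```python
import numpy as np

def index_to_freq(n_samples, anchor_idx, anchor_freq):
    """Piecewise-linear map from raw scan sample index to relative detuning,
    pinned at reproducible spectral landmarks (anchors) of known frequency."""
    idx = np.arange(n_samples)
    return np.interp(idx, anchor_idx, anchor_freq)

# the same two absorption dips land at different sample indices in two scans
# because of scan nonlinearity and slow drift (hypothetical positions)
f_a = index_to_freq(1000, [120, 870], [-1.2, 1.8])   # scan A landmark indices
f_b = index_to_freq(1000, [140, 845], [-1.2, 1.8])   # scan B, drifted

# after alignment both landmark samples sit at the same relative detuning
print(f_a[120], f_b[140])   # both -1.2
```

        Once each scan carries its own frequency axis, the spectra can be interpolated onto a common detuning grid and averaged into a reference profile.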

        Speaker: Qianzhu Li (Carleton University)
      • 213
        Advances in Laser Cooling with Type-II Magneto-Optical Traps for Cs and Li

        Magneto-optical traps (MOTs) are a central tool for producing and confining ultracold atomic gases. Most experiments rely on type-I MOTs, which exploit closed cycling transitions to provide strong radiation-pressure forces and efficient cooling. More recently, however, type-II MOTs, in which closed cycling transitions are absent, have been used in several laser cooling experiments.

        Experimental and theoretical studies have demonstrated that atom capture in a type-II MOT can significantly suppress light-assisted collisions and re-absorption-induced heating. These effects reduce density-limiting loss mechanisms and lead to colder, higher-density samples that are well suited for subsequent loading into shallow optical traps.

        In this work, we present results on the efficient operation of a type-II MOT for Cs. We also report a characterization of a type-I MOT and ongoing progress toward the realization of a type-II MOT for lithium.

        Speaker: Forouzan Forouharmanesh
      • 214
        Ultracold radioactive atoms in an optical lattice

        Radioactive atoms and molecules are a new frontier in atomic physics currently being explored in cutting-edge precision measurements. $^{221}$Fr’s nuclear octupole deformation gives ultracold FrAg molecules an enhanced ability to detect hadronic CP violation. We are currently working toward studying atomic francium’s scattering properties through photoassociation spectroscopy. To this end, we have expanded the capabilities of TRIUMF’s francium trapping lab [1] by implementing absorption imaging for a more accurate measurement of the trapped Fr number and an optical dipole trap for denser and colder samples.

        We report evidence of a 1D optical lattice in our apparatus in both Fr and Rb. Using these new capabilities, we have also measured $^{211}$Fr’s D2 atomic transition to an accuracy of 3 MHz. These results demonstrate a step forward in control over ultracold radioactive atoms. TRIUMF’s forthcoming Radioactive Molecules Laboratory will provide the infrastructure necessary for further development of techniques for precision studies of exotic radioactive atoms and molecules.

        1. M. Tandecki et al., J. of Instr., 8, 12006 (2013)
        Speaker: Andrew Lagno (University of Waterloo)
      • 215
        Design of an Untrapped Three-Photon Optical Clock with Bosonic Yb

        Atomic clocks provide the most precise time and frequency standards, with applications in navigation, telecommunications, and precision tests of fundamental physics. We present the design and ongoing implementation of an optical clock architecture based on a three-photon transition in untrapped bosonic ytterbium. By interrogating ballistic Yb atoms using a coherent three-photon transition and arranging the three clock lasers to cancel the net Doppler shift, this approach is designed to suppress dominant systematic effects associated with optical clocks, including Zeeman and lattice light shifts. Analytical modeling and numerical simulations describe the effective two-level dynamics of the three-photon transition and we optimize Ramsey interrogation pulse sequences under realistic experimental conditions. We also sweep the laser detunings in simulations of the atomic dynamics, revealing dressed-state features associated with intermediate resonances and identifying detuning regions that maintain high-contrast population transfer. For this clock architecture, the dominant sources of systematic uncertainty are expected to include second-order Zeeman shifts, blackbody radiation, and residual light shifts from the clock lasers.
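        As a simplified point of reference for the Ramsey optimization described above (ideal, instantaneous π/2 pulses on an effective two-level system, ignoring the dressed-state structure of the three-photon transition), the textbook Ramsey fringe is:

```python
import numpy as np

def ramsey_probability(detuning, t_free):
    """Ideal Ramsey fringe for two perfect, instantaneous pi/2 pulses separated
    by free-evolution time t_free: P(delta) = cos^2(delta * t_free / 2)."""
    return np.cos(detuning * t_free / 2.0) ** 2

# fringes narrow as the free-evolution time grows: the central fringe has
# full width ~ pi / t_free in angular detuning
delta = np.linspace(-10.0, 10.0, 2001)
fringes_short = ramsey_probability(delta, 1.0)
fringes_long = ramsey_probability(delta, 5.0)
```

        The central fringe narrows as 1/t_free, which is what makes long free-evolution times attractive for clock interrogation, at the cost of tighter laser-coherence requirements.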

        Speaker: QiLin Xue (University of Toronto)
    • (DCMMP) T2-4 | (DPMCM)
    • (DPE) T2-5 | (DEP)
      • 216
        The Farm as a Solar Energy Power Plant

        The concept of energy is one that many teachers struggle with. Because of its abstract nature, developing hands-on activities that address the idea of energy can be challenging. Additionally, the core concepts that energy is conserved and can be transformed mean that basic ideas related to energy can and should be applied to systems in different scientific disciplines, which many teachers find difficult.

        This presentation describes a science outreach project that aims to achieve, through combination of two originally very different activities, something much greater than the sum of its parts: a hands-on experience that attempts to improve students’ understanding of the nature of energy transformation and conservation.

        For many years the outreach team of Lakehead University (through projects such as the PromoScience-supported “EcoReach”) has brought activities related to alternative energy to schools. Most of these extension opportunities involve examining photovoltaic panels, small wind turbines, and the energy they can harness. At the same time, another PromoScience-funded project, “Farm Lab”, has brought garden-based science activities to students, often involving growing vegetables, either in small-scale indoor greenhouses or outdoors in school, campus or community gardens.

        Our newest project combines these two types of activities to demonstrate, in a tangible and quantifiable way, how the solar energy captured by a photovoltaic panel and stored in a rechargeable battery compares with that captured and stored in a corn plant.

        The activity has been carried out with classes of students in grades 4, 5 and 6, grade 9, first- and second-year undergraduate Engineering candidates, and teacher candidates in the professional years of Lakehead’s Bachelor of Education program. This presentation discusses the basic elements of the activity, the successes and failures encountered with different groups, the changes that have been made, and future directions the project may take.

        Speaker: Christopher Murray (Lakehead University)
      • 217
        Muons in Classrooms: The BC MuSIC Project

        Particle physics is all around us! There are many ways to detect particles, and one of the main sources of high-energy particles is outer space, in the form of cosmic rays. The Cosmic Watch, developed by Spencer Axani, is a cosmic muon detector small enough to hold in the palm of your hand. We are distributing these detectors to schools in Vancouver and greater BC to give students the ability to see cosmic muon events, network with other users to detect coincident cosmic events, and track solar activity over time. This talk will present an overview of the Cosmic Watch system, along with the technical requirements and data analysis, for anyone interested in setting up a similar outreach project at their institute.

        Speaker: Emily Filmer (TRIUMF (CA))
      • 218
        In Memoriam - David Harrison

        Dr. Jason Harlow from the University of Toronto discusses many of Dr. Harrison’s innovative contributions to teaching and learning, especially for first-year courses.

        Speaker: Jason Harlow (University of Toronto)
    • (DNP) T2-6 Nuclear Theory | Théorie nucléaire (DNP)
      • 219
        Nucleosynthesis for the lightest and the heaviest elements

        The lightest elements in the universe, such as most helium and some lithium, were forged within the first twenty minutes after the Big Bang through Big Bang Nucleosynthesis (BBN). The theory of BBN features a remarkably small core reaction network of a dozen key nuclear reactions. This theoretical simplicity, combined with precision reaction data from nuclear experiments and cosmological inputs from cosmic microwave background observations, allows BBN to yield abundance predictions with an accuracy rarely seen in other areas of nuclear astrophysics. By comparing these predictions with precision primordial abundance observations, BBN provides a rigorous test of standard cosmology and serves as a sensitive probe for new physics beyond the Standard Model, in ways complementary to terrestrial experimental searches. In this talk, incorporating the most recent observational inputs, we will first report the latest BBN calculations and the resulting constraints on relevant cosmological parameters.

        Moving beyond the light elements, neutron captures are the crucial processes that create elements heavier than iron on the opposite side of the nuclear chart. The neutron density at which a capture process operates in its astrophysical site is the primary determinant of its unique nucleosynthesis path and resulting abundance pattern. The rapid neutron capture process (r-process), occurring in explosive events such as neutron star mergers with extremely abundant free neutron supplies, is traditionally held responsible for the enrichment of actinides, in particular thorium and uranium. However, recent research has suggested the possibility of synthesizing these actinides through the intermediate neutron capture process (i-process) in AGB stars, at neutron densities many orders of magnitude lower than those required for the r-process. This possibility could fundamentally change our understanding of galactic chemical evolution. To explore the viability of this alternative scenario, we employed the PRISM code to simulate nucleosynthesis across a range of neutron injection strengths and timescales. We will conclude by comparing specific examples where actinides are successfully forged against those where they are created but depleted by subsequent nuclear reactions, identifying the conditions for i-process actinide survival.

        Speaker: Tsung-Han Yeh
      • 220
        Emulators for nuclear physics: from microscopic forces to many-body systems

        In recent years, there has been a concentrated effort to improve our understanding of the uncertainties that are inherent to solving the nuclear many-body problem. This work focuses on the implementation of emulators for both few- and many-body nuclear physics problems in order to properly handle these uncertainties. We explore and compare two different emulators for solutions of the three-nucleon Faddeev equations, which allows a determination of the full posterior distribution of the low energy constants describing the nuclear interaction in chiral effective field theory. The resulting distribution of interactions can then be used as input for state of the art emulators for auxiliary-field diffusion Monte Carlo calculations of light nuclei, thus propagating the inherent uncertainties in the nuclear interaction to the final many-body predictions.

        Speaker: Ryan Curry (University of Guelph)
      • 221
        A Coincidence Algebra bundle for Decay Quivers: An Algebraic Approach to Gamma-ray Spectroscopy

        Motivated by the need for a more comprehensive algebraic structure for calculating coincidence probabilities of a general decay scheme in gamma-ray spectroscopy, we model the decay scheme, rather naturally, as a quiver, through which we define a decay quiver. The path algebra of a quiver is the underlying, more general, algebra for the transition matrices typically used in modelling decay schemes. The path algebra allows for concatenation of transitions, which affords the calculation of cascade probabilities. We extend the path algebra to allow for the multiplication of non-composable paths, i.e., transitions that don't directly share a connecting level. We define the coincidence algebra as the algebra that allows for such an extension and realize it as the fibres of a coincidence algebra bundle, whose base space is the path algebra where decay schemes live. A given decay scheme's coincidence probabilities are calculated on its fibre. Detection maps are defined as linear maps on the base space that map transition probabilities to detected probabilities.
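        The cascade and coincidence probabilities that this algebra formalizes can be sketched on a toy scheme (a hypothetical three-level decay with branching ratios chosen for illustration; detection efficiencies are omitted):

```python
# transitions[level] = list of (gamma_label, final_level, branching_ratio)
# toy scheme: level 2 decays to 1 (g1, 70%) or to 0 (g3, 30%); 1 decays to 0 (g2)
transitions = {
    2: [("g1", 1, 0.7), ("g3", 0, 0.3)],
    1: [("g2", 0, 1.0)],
    0: [],
}

def cascades(level, path=(), prob=1.0):
    """Enumerate all decay paths (concatenations of composable transitions)
    from `level` to the ground state, with their cascade probabilities."""
    if not transitions[level]:
        yield path, prob
        return
    for label, nxt, br in transitions[level]:
        yield from cascades(nxt, path + (label,), prob * br)

def coincidence(a, b, start=2):
    """Probability that gammas a and b are emitted in the same cascade."""
    return sum(p for path, p in cascades(start) if a in path and b in path)

print(coincidence("g1", "g2"))  # every g1 cascade continues through g2
```

        For composable pairs the coincidence probability reduces to ordinary path concatenation, while for non-composable pairs it vanishes unless some cascade contains both transitions.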

        Speaker: Liam Schmidt (University of Guelph)
      • 222
        Ab Initio Nuclear Theory for Neutrinoless Double-Beta Decay

        The search for neutrinoless double-beta decay ($0\nu\beta\beta$) is among the most promising probes of physics beyond the Standard Model. Its observation would establish lepton-number violation, confirm the Majorana nature of neutrinos, and probe the absolute neutrino mass scale. As upcoming experiments aim to extend half-life sensitivities by up to two orders of magnitude, reliable nuclear matrix elements (NMEs) are critical: without them, neutrino masses and the underlying decay mechanisms cannot be meaningfully constrained. Since $0\nu\beta\beta$ is intrinsically a beyond-Standard-Model process, multiple mechanisms may contribute. While the standard light-neutrino exchange channel has been widely studied, exotic mechanisms—particularly those mediated by heavy particles—remain comparatively unexplored within nuclear theory. Heavy sterile Majorana neutrinos, in particular, are strongly motivated in many extensions of the Standard Model, where they may play a significant or even dominant role in $0\nu\beta\beta$ decay.

        We present the first ab initio calculations of the short-range NMEs required to describe exotic $0\nu\beta\beta$ exchange mechanisms in four experimentally relevant isotopes. Starting from two- and three-nucleon interactions derived from chiral effective field theory, we employ the in-medium similarity renormalization group to construct effective valence-space Hamiltonians and consistently evolved decay operators. Because heavy-particle exchange probes short distances, the resulting operators show strong sensitivity to the renormalization procedure. By varying chiral interactions and operator-renormalization schemes, we obtain NME ranges that are consistent with—but generally smaller than—those from phenomenological approaches. Finally, we apply our results with current experimental limits to the 3+1 model, assuming heavy-neutrino-exchange dominates the decay, and obtain constraints in the sterile-neutrino mixing-mass parameter space.

        Speaker: Alex Todd (McGill University)
      • 223
        Higher order electroweak radiative corrections in parity violating asymmetry using covariant approach

        We perform detailed calculations of electroweak radiative corrections to parity-violating lepton scattering on a proton target, up to the quadratic and reducible two-loop level, using a covariant approach. Our numerical results are presented at energies relevant for a variety of existing and proposed experimental programs such as Qweak, P2, MOLLER, MUSE, and experiments at the EIC. The analysis shows that such corrections at Next-to-Next-to-Leading Order (NNLO) are quite significant and must be included in searches for physics beyond the Standard Model, matching the increasing precision of future experimental programs at low energy scales.

        Speaker: Dr Mahumm Ghaffar (Memorial University of Newfoundland)
    • (DQI) T2-7 | (DIQ)
      • 224
        Towards implementation-secure and scalable quantum key distribution: protocols and networks

        Quantum key distribution (QKD) uses quantum mechanics to promise information-theoretic security between remote parties. It has received worldwide attention as a candidate protocol for next-generation secure communications, and it paves the way for large-scale quantum networks. However, practical components in the sender/receiver of a QKD system can still contain loopholes. Ensuring the implementation security of QKD remains, as of today, a major challenge in the field. In this talk, I will introduce two of our main contributions to addressing this challenge: (1) asymmetric Measurement-Device-Independent (MDI) and Twin-Field (TF) QKD protocols, which close loopholes in the receiver's detectors while also enabling high-performance, scalable quantum networks unrestrained by user location, and (2) fully passive QKD protocols, which remove loopholes from the sender's modulators and can further be combined with MDI-QKD/TF-QKD to protect the sender and the receiver simultaneously and enable more secure quantum networks. These two new families of protocols represent a major step toward building more implementation-secure QKD systems and future quantum networks with high performance and robustness.

        Speaker: Prof. Wenyuan (Mike) Wang (University of Calgary)
      • 225
        Learning How to Count: Individual Emitter Inference from Ensemble Photon Statistics

        We present progress on exploiting photon-number statistics to characterize ensembles of distinguishable light emitters beyond conventional resolution limits. The photon-number distribution of light collected from an ensemble can carry statistical signatures that reflect the properties of its constituents, even when relevant degrees of freedom such as time, frequency, or spatial mode cannot directly be resolved. A well-known example is the use of the second-order correlation function, $g^{(2)}$, to estimate the number of independent single-photon emitters within an ensemble.
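
        As a minimal illustration of the $g^{(2)}$-based emitter counting mentioned above (a sketch, not code from the talk): for $N$ identical, independent single-photon emitters, $g^{(2)}(0) = 1 - 1/N$, so a measured $g^{(2)}(0)$ can be inverted for the emitter number.

```python
def estimate_emitter_count(g2_zero: float) -> float:
    """Estimate the number N of identical, independent single-photon
    emitters from the measured zero-delay correlation g2(0).

    For such an ensemble g2(0) = 1 - 1/N, hence N = 1 / (1 - g2(0)).
    """
    if not 0.0 <= g2_zero < 1.0:
        raise ValueError("g2(0) must lie in [0, 1) for this model")
    return 1.0 / (1.0 - g2_zero)

# A single emitter gives g2(0) = 0; two emitters give g2(0) = 0.5, etc.
```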

        Building on this idea, we develop statistical models of photon emission to identify and quantify individual emitters from photon-number measurements, and we analyze the information-theoretic bounds governing this inference problem. We show that emitter identification remains feasible even when the emitters overlap within the detector response and are not individually resolvable.

        To demonstrate these ideas experimentally, we generate photons in ultrafast time bins separated by only a few picoseconds using group-velocity delays introduced by birefringent materials. The time bins follow thermal photon statistics produced by a spontaneous parametric down-conversion process. We detect the resulting light with photon-number-resolving transition-edge sensors capable of resolving up to ten photons within a single optical pulse. Although the detector is far too slow to resolve the individual time bins, we show that our statistical inference techniques can accurately retrieve the mean photon numbers of each bin from the measured photon-number distributions.

        Finally, to extend this framework beyond analytically tractable models, we employ normalizing-flow neural networks capable of learning arbitrary probability distributions. This approach allows us to incorporate realistic experimental effects such as optical loss, detector demultiplexing, and noise, enabling the analysis of increasingly complex emitter ensembles, including both thermal and single-photon sources.

        Speaker: Nicolas Dalbec-Constant (University of Ottawa)
      • 226
        Quantum Statistics of Pulsed Light

        The coherence of a light source is a key property that underpins most of the assumptions we can make about that source. However, the coherence of light is commonly defined in multiple ways. Practically, the coherence time is often defined as the maximum time difference at which interference fringes are visible when a light source interferes with itself. We can define the coherence time of light sources like continuous-wave lasers and thermal sources in this way, even though their coherence times arise from different statistical properties. This difference in statistical properties matters because it changes the measurement of light in many other experiments. For example, the coherence time of thermal light is often measured through the second-order coherence, $g^{(2)}(\tau)$. For a thermal source, $g^{(2)}(\tau)$ goes to the average intensity for delays $\tau$ large compared to the coherence time at which interference fringes are visible. The $g^{(2)}(\tau)$ of a continuous-wave laser is instead constant at the average intensity and does not depend on the coherence time. Pulsed light is multimode, unlike a continuous-wave laser, but does not arise from an ensemble of independent sources like thermal light. Physically, this changes how pulsed light is time-averaged in a measurement, which in turn changes how measurement results reflect the statistical properties of the light. We return to the foundations of quantum optics to find the statistics that can describe a beam of multimode light. We then predict the results of common coherence measurements such as interference fringes and intensity interference. Based on the known results of these measurements with pulsed light, we determine a general quantum statistical description of pulsed light.
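
        The contrast between thermal and coherent (laser-like) photon statistics discussed above can be checked numerically. The sketch below is illustrative only (the mean photon number $\mu = 1.5$ is an arbitrary choice, not a value from the talk); it computes $g^{(2)}(0) = \langle n(n-1)\rangle/\langle n\rangle^2$ from the Poisson and Bose–Einstein photon-number distributions, recovering the textbook values 1 and 2.

```python
import math

def g2_from_pmf(pmf, nmax=100):
    """Zero-delay second-order coherence g2(0) = <n(n-1)> / <n>^2,
    computed from a photon-number distribution pmf(n) by truncated sums."""
    mean = sum(n * pmf(n) for n in range(nmax))
    pair = sum(n * (n - 1) * pmf(n) for n in range(nmax))
    return pair / mean**2

mu = 1.5  # mean photon number (arbitrary choice for this illustration)

# Coherent (laser-like) light: Poissonian photon statistics.
poisson = lambda n: math.exp(-mu) * mu**n / math.factorial(n)
# Thermal light: Bose-Einstein photon statistics.
thermal = lambda n: mu**n / (1.0 + mu)**(n + 1)

# g2_from_pmf(poisson) -> 1 (coherent), g2_from_pmf(thermal) -> 2 (thermal)
```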

        Speaker: Daniel James (University of Toronto)
      • 227
        Development of a Quantum Network Testbed for Standards and Quantum Metrology

        The advent of reliable single-photon generation and detection has significantly accelerated the development of quantum technologies in real-world applications, including quantum-secure communication across metropolitan networks. To support the standardization of the operation and characterization of quantum devices, the NRC Metrology Research Centre has established a calibration platform for single-photon sources and detectors.

        Building on this foundation, we are developing a quantum network testbed based on entangled particles of light. Each node is equipped with photon-number-resolving detectors and quantum state analyzers, with real-time network synchronization. This testbed will provide a platform for academic and industrial validation of quantum hardware by enabling in-situ calibration within a networked configuration, thereby supporting the development of Canadian quantum photonics standards.

        In addition, it will offer infrastructure to investigate fundamental quantum correlations between entangled particles, with the aim of enhancing measurement precision beyond classical limits and approaching the ultimate bounds set by quantum mechanics.

        Speaker: Dr Jeongwan Jin (National Research Council Canada)
      • 228
        Discussion
    • (DPP) T2-8 Plasma and Material Processing | Traitement par plasma et traitement des matériaux (DPP)
      • 229
        Making of efficient p-type GaN by plasma-assisted molecular beam epitaxy

        Efficient p-type doping remains a central challenge in GaN epitaxy and associated devices such as light-emitting diodes, laser diodes, photoelectrochemical cells as well as quantum devices. Plasma-assisted molecular beam epitaxy (PA-MBE) offers a compelling alternative to metal-organic vapor phase epitaxy (MOVPE) due to its intrinsic advantages in Mg incorporation and the absence of post-growth activation—an especially critical benefit for tunnel-junction and polarization-engineered devices. Despite numerous reports of successful Mg doping, a comprehensive understanding of growth-regime-dependent incorporation behavior across III/V ratios and temperature ranges remains incomplete.

        In this work, we present a systematic investigation of Mg acceptor incorporation in GaN grown by PA-MBE under both Ga-rich and N-rich conditions spanning low (~580 °C) and high (~740 °C) temperature regimes. The growth temperature boundary near ~650 °C separates two fundamentally distinct kinetic regimes: in the low-temperature Ga-rich regime, a self-regulated Ga bilayer stabilizes the surface, while at higher temperatures excess Ga rapidly desorbs, eliminating bilayer formation. Although Ga-rich growth has traditionally been favored for achieving high crystalline quality, its influence on Mg incorporation efficiency requires further investigation.

        A comprehensive growth map was constructed to correlate Mg incorporation, surface morphology, and electrical properties as a function of III/V ratio and temperature. Secondary ion mass spectroscopy (SIMS) reveals significantly enhanced Mg incorporation efficiency (~80%) under N-rich conditions. In contrast, Ga-rich growth produces smoother surfaces with root-mean-square roughness of ~1–2 nm but exhibits reduced Mg incorporation. Room-temperature Hall measurements confirm tunable hole concentrations ranging from ~7 × 10¹⁷ cm⁻³ (Ga-rich) to ~2 × 10¹⁹ cm⁻³ (N-rich) at fixed Mg flux. These results establish a clear trade-off between surface morphology and Mg incorporation efficiency and provide a practical growth-regime guideline for optimizing p-type GaN. The findings are directly relevant to nitride-based light emitters, tunnel diodes, quantum emitters, PEC cells, and other advanced electronic and photonic device architectures.

        Speaker: Sharif Md. Sadaf (INRS)
      • 230
        Advances in Plasma-Based Waste Treatment

        This talk presents advanced approaches for plasma-based waste treatment. Different designs of plasma torches and generation systems, including RF, DC, and MW plasma, are analysed and compared for waste-to-energy and radioactive waste treatment applications. A novel plasma torch design is proposed to support different scales and types of waste treatment. Process engineering techniques for gasification and pyrolysis are integrated with the radioactive waste treatment process and illustrated with waste characterization. The proposed approaches reduce waste treatment costs, risks, and volumes, in addition to lowering greenhouse gas emissions and improving lifecycle performance. Plasma systems are applied to nuclear and municipal waste treatment, with analysis of different waste categories and types. A process design is discussed for a plasma torch that can reduce the volume and lifecycle cost of waste processing. Simulation methods and experimental setups demonstrate lab-scale process technologies for plasma-based waste treatment.

        Speaker: Prof. Hossam Gaber (Ontario Tech University)
      • 231
        Strain Engineering in Epitaxial SrTiO₃ Thin Films Grown by Magnetron Sputtering on Vicinal MgO Substrates

        SrTiO₃ (STO), a functional perovskite oxide, is dielectric at room temperature but can become ferroelectric under strain or doping. Here, STO thin films are grown epitaxially on MgO substrates with vicinal surfaces exhibiting flat terraces, with RMS roughness on the order of tens of picometers. The lattice mismatch between MgO (~4.2 Å) and STO (~3.9 Å) results in a tensile strain of approximately 7.8%, which is expected to induce strain-driven modifications of the material properties, potentially including ferroelectricity. STO films with different thicknesses were deposited, and the influence of RF power and deposition pressure was systematically investigated. AFM topography shows that the STO films replicate the vicinal MgO substrate structure, confirming epitaxial growth. XRD measurements detect the main STO peak, which is shifted due to the strain. The calculated d-spacing indicates an increase in the out-of-plane lattice parameter, which is unexpected for a purely tensile strain. Furthermore, survey XPS spectra confirm the correct stoichiometry of the STO films. Film thickness was determined by XRR and TEM imaging. Films deposited at low RF power (10 W) and high pressure (40 mTorr) resulted in ultra-thin (~6 nm) layers with exceptionally smooth surfaces. Higher power and pressure led to thicker films, together with increased surface roughness. At the most extreme deposition conditions, corresponding to 20 W of deposition power, features appear that can be associated with deposition and re-sputtering by negatively charged oxygen ions. At intermediate conditions (15 W, 20–40 mTorr), small surface holes appear. These features are likely initiated in two steps: 1) re-sputtering of atoms from sites with the lowest binding energy, such as step edges; 2) surface relaxation in these heavily strained regions. The results indicate that strain plays a significant role in the evolution of surface morphology. We highlight how fine-tuning deposition parameters can control film thickness, surface roughness, and strain-driven surface features.

        Speaker: Elnaz Familsatarian (Institut national de la recherche scientifique, centre Énergie, Matériaux, Télécommunications (INRS-EMT) 1650 blvd Lionel Boulet, J3X 1P7 Varennes, QC, Canada)
      • 232
        Template-Free Growth of Monocrystalline Silver Nanosheets: Synergistic Effects of Interfacial Confinement and Oxidative Etching in Plasma-Liquid Systems

        The development of sustainable, "green" synthesis routes for two-dimensional (2D) noble metal nanostructures remains a significant challenge in materials science, often requiring complex surfactants, capping agents, or physical templates to induce anisotropy. In this work, we demonstrate a versatile, eco-friendly platform for the precision synthesis of high-aspect-ratio silver (Ag) nanostructures using nanosecond-pulsed electrical discharges over aqueous solutions. Unlike traditional colloidal chemistry, this method operates without the use of chemical stabilizers, surfactants, or physical templates, relying instead on the unique topological and chemical environment of the plasma-liquid interface.
        Our results show that the interaction between the plasma and a droplet of silver nitrate solution yields high-purity, monocrystalline Ag nanosheets. Under standard discharge conditions, these nanostructures exhibit average dimensions of approximately 500 nm in length and 8 nm in thickness. We propose that this extreme anisotropy is driven by a dual mechanism specific to the reactor geometry and plasma chemistry: spatial confinement and oxidative etching. Physically, the hemispherical shape of the droplet and the localized plasma interaction create a "quasi-two-dimensional" reaction zone that restricts vertical diffusion and promotes lateral migration of adatoms.
        Chemically, this physical confinement is reinforced by oxidative etching driven by hydrogen peroxide (H₂O₂) generated in situ at the interface. The abundant reactive oxygen species selectively etch specific crystallographic facets, destabilizing spherical growth and forcing the crystal to expand laterally. To validate this growth mechanism, control experiments were conducted with the direct introduction of H₂O₂ to the precursor solution. This modification resulted in a dramatic enhancement of anisotropic growth, yielding significantly larger monocrystalline sheets with dimensions reaching ~10 µm in length and ~78 nm in thickness.
        These findings confirm that the concentration of reactive oxygen species, bounded by the droplet interface, is the rate-limiting factor in defining the aspect ratio of the nanomaterials. This work provides critical mechanistic insights into mask-less lithography at the nanoscale and suggests a scalable pathway for synthesizing stabilizer-free 2D nanosheets of other transition metals and their alloys.

        Speaker: Lyes Sebih (Université de Montréal)
    • (PPD) T2-9 | (PPD)
      • 233
        Probing the Nature of Neutrinos with the Deep Underground Neutrino Experiment (DUNE)

        Neutrino oscillations have led to the discovery that neutrinos have nonzero masses. The current model describes the oscillation phenomenon in terms of three mixing angles and one CP-violating phase. Within the three-flavour paradigm, the other two major unknowns are the neutrino mass ordering and whether charge-parity is violated in the leptonic sector. The Deep Underground Neutrino Experiment (DUNE) is an ambitious research program in neutrino physics under construction at Fermilab and the Sanford Underground Research Facility (SURF), uniquely designed to measure many oscillation parameters and eventually test the validity of the oscillation model. Additionally, its design will offer the opportunity for non-beam related neutrino physics including supernova neutrinos, atmospheric neutrinos, and neutrinos originating in the core of the Sun. DUNE is a long baseline neutrino oscillation experiment with a detector close to the neutrino beam source at Fermilab (Near Detector) and a detector 1300 km away in South Dakota (Far Detector). Both the Near and Far Detector are based on Liquid Argon Time Projection Chamber (LArTPC) technology that measures neutrinos and antineutrinos over a wide range of energies. The Near Detector measures the unoscillated neutrino flux and constrains systematic uncertainties to predict the neutrino flux at the Far Detector, where the oscillated (anti-)neutrino beam is measured. The Far Detector will comprise at least two multi-kiloton underground LArTPCs, and the Near Detector will consist of a LArTPC module combined with two additional tracking detectors to obtain a robust characterization of the neutrino flux. In this talk, I will present the rich DUNE neutrino physics program, its sophisticated design, and the results from the Near and Far detector prototypes, together with the current status and future plans, emphasizing the contributions of the Canadian institutions involved.

        Speaker: Gianfranco Ingratta (York University - CA)
      • 234
        ICARUS at Fermilab: Experimental Program and Status

        The process of neutrino oscillation, where neutrinos created as one type (flavour) will be measured as another flavour after propagation with an oscillating probability, was confirmed by SNO in Canada and Super-Kamiokande in Japan approximately 25 years ago. Since then, experiments have gained increased reach and precision on the parameters involved. A new generation of experiments, led by the Deep Underground Neutrino Experiment (DUNE) in North America using a liquid argon detector and Hyper-Kamiokande in Japan using water Čerenkov detection, will aim to reach unprecedented sensitivity to CP violation in neutrinos and provide important results on several other parameters and processes. Meanwhile, a handful of other open questions remain related to neutrinos; for example, could there be an additional “sterile” neutrino state participating in oscillations, or other beyond-Standard-Model behaviour of neutrinos?

        The Short Baseline Neutrino (SBN) Program at Fermilab near Chicago is an experimental program that will use the data from multiple detectors at a short distance from the neutrino beam origin in seeking to clarify this sterile neutrino possibility. Much like a typical two-detector oscillation measurement where a near and far detector are used to study the beam before and after oscillations, the SBN Program will use two detectors to gain sensitivity to short-baseline oscillatory effects or other beyond Standard Model signatures.

        Specifically, the Short Baseline Near Detector (SBND) and Imaging Cosmic and Rare Underground Signals (ICARUS) detectors are operating at approximately 100 m and 600 m along a neutrino beamline with peak flux between several hundred MeV and approximately 1 GeV. ICARUS also operates approximately six degrees off-axis and 800 m away from a second neutrino beamline at Fermilab. Though significantly smaller than the DUNE far detectors, the SBN Program uses liquid argon detectors and provides key experience en route to DUNE. Likewise, in addition to oscillation searches, these detectors will collect neutrino interaction data on argon targets, enabling key measurements of neutrino interactions that will unlock the next generation of oscillation measurements. This talk will present the ICARUS detector, its experimental program, and recent results, highlighting a measurement of neutrino interactions and other analyses.

        Speaker: Bruce Howard (York University & Fermilab)
      • 235
        Chroma: An Open-Source GPU-Based Optical Simulation Tool for Liquid Noble Detectors and Beyond

        Accurate and fast optical photon simulation is critical for optimizing the geometry design of next-generation liquid noble and scintillator-based detectors, where light collection efficiency directly drives energy and position resolution. While standard tools like Geant4 are robust, they are often computationally expensive for high-photon-yield applications. In this talk, we present our development of a workflow based on Chroma, a GPU-based ray-tracing framework that accelerates optical photon propagation by orders of magnitude compared to CPU-based methods. To demonstrate its physical accuracy, we showcase validation results from the Light-only Liquid Xenon (LoLX) experiment. We validate the simulation by comparing its outputs with experimental data, specifically highlighting the energy and position reconstruction of events from an external gamma calibration source. This open-source package aims to provide a user-friendly, scalable, and high-speed simulation solution for future liquid noble detectors or R&D experiments that require detailed optical characterization.

        Speaker: Xiang Li (TRIUMF, Simon Fraser University)
      • 236
        A New Liquid Argon Test Facility for Studies of Noble Liquids and Advanced Photodetectors

        We are commissioning a liquid argon (LAr) cryostat at Queen’s to study the properties of noble liquids and photodetectors. It will enable local experimental research on xenon-doped liquid argon, a promising target medium for next-generation dark matter detectors such as DarkSide-LowMass and the Scintillating Bubble Chamber (SBC), which aim to probe weakly interacting massive particles (WIMPs) at sub-GeV mass scales, and it will allow the characterization of sophisticated silicon photomultipliers.

        In pure liquid argon, scintillation light is emitted at 128 nm with a long-lived triplet component that enables pulse-shape discrimination. Xenon doping shifts the emission to 178 nm on much faster timescales, increasing light yield by up to ~25% and enabling lower detection thresholds, but at the cost of reduced pulse-shape discrimination, motivating dedicated R&D to understand and optimize this trade-off.

        The first and essential step toward these studies is the commissioning of the cryostat and its supporting infrastructure, including the gas handling, purification, and SiPM-based data acquisition systems. Following commissioning, the facility will support studies of radon mitigation using zeolite-based adsorption. While activated charcoal is the current industry standard, recent results indicate that silver-zeolites have a superior adsorption coefficient at ambient temperatures. This facility will investigate their effectiveness at cryogenic temperatures, compatibility with xenon-doped argon, and achievable radiopurity. Furthermore, a custom-designed double-phase Time Projection Chamber (TPC) will allow for the very first characterization of the ionization signal from xenon-doped argon, as well as of triethylamine-, trimethylamine-, and trimethylglycine-doped argon at keV and sub-keV scales. In addition, the facility will enable the cryogenic characterization, in both pure and xenon-doped liquid argon, of digitally controlled silicon photomultipliers (3D-dSiPMs) developed at the Université de Sherbrooke. Overall, this program aims to provide critical R&D input for noble-liquid dark matter detectors and associated photosensor technologies aimed at extending sensitivity to sub-GeV mass scales.

        Speaker: Anantha Padmanabhan
      • 237
        Radon Assay Facility at the University of Windsor for the nEXO Experiment

        Next-generation experiments searching for extremely rare processes, such as neutrinoless double beta decay, require unprecedented control and understanding of radioactive backgrounds. Among these, radon and its progeny represent a dominant and challenging background source due to their mobility and ability to plate out on detector surfaces. The nEXO experiment, a proposed tonne-scale liquid xenon time projection chamber designed to search for neutrinoless double beta decay of $^{136}$Xe, demands radon concentrations at or below the micro-becquerel level to achieve its targeted sensitivity. We focus on the isotope $^{222}$Rn, whose decay chain includes $^{214}$Bi, producing gamma emissions with energies close to the $Q$-value of the $^{136}$Xe double beta decay, making it a critical background for signal sensitivity. This talk will present the radon assay program developed at the University of Windsor in support of the nEXO collaboration. The program focuses on the design, construction, and characterization of high-sensitivity electrostatic chambers (ESCs) for radon emanation measurements. The developed assay infrastructure is intended to screen detector components that come into direct contact with liquid xenon or associated heat-transfer fluids, providing essential input for material selection and background modeling.

        Speaker: Abo-bakr Emara
    • 15:45
      Health Break with Exhibitors | Pause santé avec les exposants
    • (PPD) T3-1 | (PPD)
      • 238
        ATLAS Upgrades for the High Luminosity LHC

        While the ongoing Run-3 data-taking campaign will provide twice the integrated proton-proton luminosity currently available at the Large Hadron Collider (LHC), most of the data expected for the full LHC physics program will only be delivered during the High-Luminosity LHC phase, currently scheduled to start in 2030. For this, the LHC will undergo an ambitious upgrade program to deliver an instantaneous luminosity of 7.5 × 10³⁴ cm⁻² s⁻¹, allowing the collection of more than 3 ab⁻¹ of data at √s = 13.6 (14) TeV. This unprecedented data sample will allow ATLAS to perform several precision measurements to constrain the Standard Model (SM) in as-yet unexplored phase spaces, in particular in the Higgs sector, a phase space only accessible at the LHC. To benefit from such a rich data sample, it is essential to upgrade the detector to cope with the challenging experimental conditions, which include very high levels of radiation and pile-up. The ATLAS upgrade comprises the completely new all-silicon Inner Tracker (ITk) with extended rapidity coverage, which will replace the current Inner Detector, and a redesigned trigger and data acquisition system for the calorimeters and muon systems, allowing the implementation of a free-running readout. This presentation will describe the status of the ongoing ATLAS detector upgrade and the main results obtained with prototypes, giving a synthetic yet global view of the whole upgrade project. The focus will be on the Canadian contributions to the tracking and calorimeter upgrades, which are in progress.

        Speaker: Christoph Thomas Klein (Carleton University (CA))
      • 239
        Error monitoring software for ITk strips readout at ATLAS

        In 2030, the Large Hadron Collider (LHC) will begin operating at a higher instantaneous luminosity. At the ATLAS detector, this will result in an average of 200 interactions per proton-proton bunch crossing. To prepare for high luminosity, the ATLAS detector must undergo several upgrades, including the replacement of the current particle tracker with the Inner Tracker (ITk). The new tracker will be able to resolve the increased density of charged-particle tracks and withstand the high-radiation environment of the high-luminosity LHC. As a result, it will provide valuable event information for precision measurements of the Standard Model and searches for physics beyond the Standard Model.
        ITk uses two main subsystems to reconstruct charged particle tracks: silicon pixels in the inner central region and silicon strips in the outer central and forward regions. The data acquisition system for the silicon strips must read out over 60 million channels at a rate of 1 MHz, and can be affected by a variety of error conditions. In this talk, I will present a new histogramming tool that can be used to monitor the rate of these errors in real time. This provides valuable information about the performance of the readout system, and can help to quickly identify and resolve problems during system tests, integration, or data-taking. The effectiveness of this monitoring tool is studied with different hardware configurations and preliminary results are presented.
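
        As a hedged sketch of what real-time error-rate histogramming of this kind might look like (the class and its interface below are hypothetical illustrations for this program book, not the actual ATLAS monitoring tool):

```python
from collections import Counter

class ErrorRateHistogram:
    """Minimal sketch of a per-type error counter binned in time windows.

    Hypothetical stand-in for a readout-monitoring tool: a real system
    would attach to the data-acquisition stream rather than take
    (timestamp, error_type) tuples directly.
    """

    def __init__(self, window_s: float):
        self.window_s = window_s
        self.bins = {}  # window index -> Counter of error types

    def record(self, timestamp_s: float, error_type: str) -> None:
        """Bin one error occurrence into its time window."""
        idx = int(timestamp_s // self.window_s)
        self.bins.setdefault(idx, Counter())[error_type] += 1

    def rate(self, window_index: int, error_type: str) -> float:
        """Errors per second of `error_type` within one window."""
        return self.bins.get(window_index, Counter())[error_type] / self.window_s

hist = ErrorRateHistogram(window_s=10.0)
for t, err in [(1.0, "timeout"), (2.5, "timeout"), (12.0, "crc")]:
    hist.record(t, err)
# Window 0 holds two "timeout" errors -> 0.2 errors/s in that window.
```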

        Speaker: Adrienne Jean Scott (University of Victoria (CA))
      • 240
        Chasing a Rare Higgs Decay: From the LHC to the High Luminosity LHC using Neural Nets

        The rare Higgs boson decay to two muons provides the best opportunity to measure the Higgs boson's coupling to a second-generation fermion. The ATLAS collaboration at CERN has recently established evidence for this decay at 3.4 standard deviations ($\sigma$) using data from Run 2 and part of Run 3. The significance is expected to increase as the remaining Run 3 data from the Large Hadron Collider (LHC) is collected and analyzed, but it is not expected to meet the stringent 5.0 $\sigma$ requirement to announce a discovery. Estimates place discovery during the High-Luminosity LHC (HL-LHC) era, during which seven times as much data as Runs 2 and 3 combined will be collected. Standard Model processes like $H\to\mu\mu$ can be simulated ahead of time using the expected conditions of the future HL-LHC to estimate the experimental reach for such rare physics processes. In particular, upgrades to the ATLAS detector for the HL-LHC era are expected to provide higher-quality information that can be used for classifying $H\to\mu\mu$ events. The analysis of $H\to\mu\mu$ during Runs 2 and 3 has so far relied heavily on boosted decision trees to provide background reduction in the signal region, but studies suggest high-information environments could benefit from using neural-network discriminators instead. This talk investigates the use of a neural-network discriminator in the search for $H\to\mu\mu$ and discovery prospects using simulated HL-LHC data.

        Speaker: Samuel Moir (Carleton University (CA))
      • 241
        DeepSets Machine Learning in FPGA to Improve the ATLAS L0 Global Trigger for HL-LHC

        The ATLAS detector is a general-purpose detector at the Large Hadron Collider (LHC) that investigates a variety of physics, ranging from the Higgs boson to possible particles that make up dark matter. The LHC will be upgraded to the High-Luminosity LHC (HL-LHC) at the end of this decade, and in subsequent run periods a high-pileup environment with up to 200 proton-proton collisions per bunch crossing is expected. A more efficient trigger system in ATLAS is required to identify and calibrate the different physics objects in this high-pileup environment. Previous offline studies have shown that machine learning models such as graph neural networks (GNNs) and DeepSets perform much better at identifying particle shower types and calibrating energy in the calorimeter than the existing architecture in the detector. The possible use of the DeepSets machine learning model for this calibration process in the online trigger is now being explored. Our DeepSets calibration model is being optimized to improve energy resolution while minimizing resource usage and latency on the FPGA. This talk will discuss an implementation proposal for its inclusion in the Level-0 (L0) Global trigger in ATLAS.
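        The permutation invariance that makes DeepSets attractive for sets of calorimeter cells can be sketched in a few lines; the shapes and random weights below are purely illustrative, not the ATLAS model:

```python
import numpy as np

# Minimal DeepSets sketch: a permutation-invariant function
#   f(X) = rho( sum_i phi(x_i) )
# over a set of per-cell feature vectors. Weights are random and
# shapes are illustrative; this is not the ATLAS model.
rng = np.random.default_rng(0)
W_phi = rng.normal(size=(4, 8))       # per-cell encoder phi
W_rho = rng.normal(size=(8, 1))       # aggregator rho

def deepsets(X):
    h = np.tanh(X @ W_phi)            # phi applied to each cell independently
    pooled = h.sum(axis=0)            # sum pooling -> permutation invariance
    return float(np.tanh(pooled @ W_rho)[0])

cells = rng.normal(size=(10, 4))      # 10 cells, 4 features each
out = deepsets(cells)
out_shuffled = deepsets(cells[rng.permutation(10)])  # same set, new order
```

        Because the pooling is a plain sum, reordering the cells leaves the output unchanged, which is the property that lets such a model handle variable, unordered collections of calorimeter cells.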

        Speaker: Chin Chong Leong (University of British Columbia (CA))
    • (DGEP) T3-10 | (DEGP)
    • (DQI) T3-11 | (DIQ)
      • 242
        TERRA: Tensor-network Error-mitigated Robust Randomized Algorithm

        We introduce TERRA (Tensor-network Error-mitigated Robust Randomized Algorithm), a practical and versatile algorithmic framework that unifies tensor-network error mitigation with robust shallow shadows to enable scalable and noise-resilient quantum algorithm development on current quantum devices. We demonstrate TERRA within the recently proposed multi-observable dynamic mode decomposition (MODMD) approach on simulators and IBM superconducting processors. We show efficient spectrum learning for the 1D Fermi–Hubbard models at large scale, achieving improved accuracy relative to standalone MODMD and other available methods. We anticipate that TERRA will serve as a widely applicable algorithmic building block for utility-scale algorithm design, providing a practical pathway toward scalable, noise-resilient computation on near-term devices.

        Speaker: Prof. Cunlu Zhou (Universite de Sherbrooke)
      • 243
        Simulating Qubit Systems with Tensor Network Algorithms

        In the quest for robust quantum computers with large numbers of qubits, one of the roadblocks is the predictability of qubit design. Exact diagonalization techniques for the simulation of quantum computing systems can only handle a handful of qubits, and we are quickly surpassing this qubit number in superconducting-circuit and other quantum systems. To simulate these systems and predict the effects of noise, we must use tensor network algorithms. These simulation techniques will quicken the pace of quantum computing advancement, as we will be able to predict which circuits or other constructions will or will not work efficiently or effectively.
        A.N. acknowledges the NSERC CREATE in Quantum Computing Program, grant number 543245. This research was undertaken, in part, thanks to funding from the Canada Research Chairs Program (CRC-2021-00257). This work has been supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) under grants RGPIN-2023-05510 and DGECR-2023-00026.

        Speaker: Anne Najdzionek (Department of Physics & Astronomy, University of Victoria, Victoria, British Columbia V8P 5C2, Canada)
      • 244
        Hybrid Quantum Genetic Algorithm for Stable Hyperparameter Optimization in Flood Prediction Models

        Quantum computing promises new ways to tackle high-dimensional, combinatorial optimization problems that appear throughout scientific modelling. Flood prediction is one such area: modern neural networks can support large-scale, data-driven flood mapping, but their performance is highly sensitive to hyperparameter choices, and repeated tuning is computationally expensive. In this work, we investigate a Hybrid Quantum Genetic Algorithm (HQGA) as a NISQ-era quantum optimization primitive for hyperparameter search in a flood-prediction neural network.

        Our case study targets binary classification of flooded versus non-flooded areas for a simulated event along the Saskatoon River, Canada. HQGA encodes candidate hyperparameter configurations (e.g., multilayer perceptron depth and width, learning rate, dropout, batch size and epochs) in a parameterized quantum circuit, while classical genetic operators perform selection, crossover and mutation. We compare HQGA against two strong classical baselines for hyperparameter optimization—a genetic algorithm (GA) and Bayesian optimization (BO)—under a shared search space and budget. Each method is run for 20 batches of 20 runs (400 HPO runs per method); every selected configuration is then evaluated by training the neural network with multiple initialization seeds and testing on both a held-out test set and an “entire region” evaluation set.
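        The classical genetic loop described above (selection, crossover and mutation over an encoded search space) can be sketched as follows; the search space, fitness surrogate and parameters are hypothetical, and the quantum-circuit sampling step of HQGA is replaced here by a classical random draw:

```python
import random

random.seed(0)

# Hypothetical hyperparameter search space (illustrative values only).
SPACE = {
    "depth": [1, 2, 3, 4],
    "width": [16, 32, 64, 128],
    "lr": [1e-4, 1e-3, 1e-2],
}

def sample():
    # In HQGA this draw would come from measuring a parameterized
    # quantum circuit; a classical random draw stands in for it here.
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(c):
    # Toy surrogate for validation accuracy: peaks at depth=3, lr=1e-3.
    return -abs(c["depth"] - 3) - abs(c["lr"] - 1e-3) * 100

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(c, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in c.items()}

pop = [sample() for _ in range(20)]
for _ in range(30):                       # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                    # elitist selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    pop = parents + children
best = max(pop, key=fitness)
```

        In the abstract's setup, evaluating `fitness` would mean training the flood-prediction network with the candidate hyperparameters; the toy surrogate above only keeps the loop runnable.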

        Across all methods, mean accuracy, F1-score, precision and recall are comparable, showing that the quantum-enhanced optimizer reaches a similar performance regime to the classical baselines. The key quantitative advantage of HQGA is stability: over 400 runs, it exhibits a statistically significant reduction in the variance of evaluation metrics and in the number of generations required to reach near-optimal performance, compared with GA and BO. This indicates that HQGA yields consistently strong hyperparameter settings with fewer poor runs.

        From a broader quantum-computing perspective, our results provide an application-driven benchmark of a hybrid quantum optimizer embedded in a realistic environmental modelling workflow. They show that, even in the NISQ regime and without claiming a formal quantum speedup, quantum-enhanced search can already act as a robust, practically useful component in flood-prediction pipelines, and outline a concrete path for scaling quantum optimization to more complex hydrological and climate-risk models as hardware matures.

        Speaker: Mahkame Salimi Moghadam (University of Calgary)
      • 245
        Variational Imaginary-Time Polynomial Filtering with Ancilla-Based Implementation

        Imaginary-time evolution (ITE) provides a direct route to ground-state preparation by exponentially suppressing excited-state contributions, but practical implementations on quantum hardware are limited by rapidly growing circuit depth and intrinsically low per-step success probabilities. To address these limitations, we develop a variational imaginary-time evolution framework based on polynomial filtering, derived from an operator-level action principle, which yields an optimized non-unitary projector expressed as a finite polynomial in the Hamiltonian. Starting from a single-ancilla, first-order imaginary-time update defined by a Taylor expansion, we show that replacing the Taylor form with a variational polynomial substantially improves both accuracy and stability at larger time steps, leading to up to an order-of-magnitude enhancement in the final success probability. Benchmarks on the transverse-field Ising model demonstrate faster convergence to the ground-state energy and improved robustness compared to standard Taylor-based ITE, highlighting variational polynomial filtering as a practical route to higher-fidelity ground-state preparation on near-term quantum devices.
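        The instability of the first-order Taylor step at larger time steps, which motivates the variational polynomial filter, can be seen in a toy diagonal model; the spectrum and step size below are illustrative:

```python
import numpy as np

E = np.linspace(0.0, 5.0, 16)   # toy diagonal Hamiltonian spectrum
psi = np.ones(16) / 4.0         # normalized uniform initial state

def ground_overlap(v):
    v = v / np.linalg.norm(v)
    return v[0] ** 2            # weight on the E=0 ground state

tau, k = 0.5, 20
exact = np.exp(-k * tau * E) * psi     # exact ITE projector e^{-k*tau*H}
taylor = (1.0 - tau * E) ** k * psi    # k first-order Taylor steps

# For E > 2/tau the Taylor factor |1 - tau*E| exceeds 1, so high-lying
# states are amplified instead of suppressed and the filter fails,
# while the exact imaginary-time projector converges to the ground state.
```

        Replacing the Taylor factor with an optimized polynomial in $H$, as in the abstract, is a way to recover the suppression of excited states without shrinking the time step.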

        Speaker: Dr Bahman Seifi (Department of Physics and Physical Oceanography, Memorial University of Newfoundland and Labrador)
    • (PPD) T3-12 | (PPD)
      • 246
        Extending the physics reach of DEAP-3600

        The DEAP-3600 experiment (Dark matter Experiment using Argon Pulse-shape discrimination) at SNOLAB in Sudbury, Ontario is a single-phase liquid argon detector designed to search for weakly interacting massive particles (WIMPs). The detector consists of 3.3 tonnes of liquid argon housed in a spherical acrylic vessel, which is viewed by 255 photomultiplier tubes (PMTs). Data taken from November 2016 to March 2020 are being used to support a broad physics program ranging from dark matter searches to rare-interaction studies in argon. Ongoing work includes exploring the evidence for high-energy solar neutrino interactions on $^{40}$Ar, and the impact of detector upgrades. Details and status of those two items will be presented.

        Speaker: Dr Shivam Garg (Carleton University)
      • 247
        HVeV Run 5: The Latest Generation of High-Voltage, Electronvolt-Scale Cryogenic Silicon Calorimeters in the Search for Dark Matter

        As an offshoot of the main SuperCDMS (Cryogenic Dark Matter Search) experiment, smaller instruments dubbed HVeV (high-voltage, electronvolt-scale resolution) detectors have been under continuous development to aid in the search for dark matter at masses below 1 MeV/c², achieving a much finer energy resolution than SuperCDMS. Similar to the SuperCDMS HV detectors, HVeV detectors hold silicon crystals (though scaled down to the order of one gram) that observe nuclear and electron recoils by directing phonon energy into highly sensitive transition-edge sensors, and can amplify signals from electron-hole pairs produced by recoils using applied voltage to induce the Neganov-Trofimov-Luke effect. HVeV Run 5 employed the latest HVeV detectors in a four-month data-taking campaign in 2024 at SNOLAB's CUTE facility, including a ten-day dark matter search, and demonstrated world-class sensitivity at low masses, enabling a new analysis that reaches yet unprobed regions of the dark matter parameter space. This presentation will cover the latest results from analyzing the detectors' performance and the resulting search data.

        Speaker: Mason Buchanan (University of Toronto)
      • 248
        Studies of Anomalous Backgrounds in the PICO Detectors

        The PICO experiment is a leading direct dark matter search that uses superheated Freon to identify interactions between dark matter particles and ordinary matter. These detectors are bubble chambers, where a potential dark matter particle can scatter off a nucleus and impart an amount of localized energy above a threshold. This causes a bubble nucleation in the superheated fluid, most commonly C$_3$F$_8$ in PICO. The resulting phase transition is optically and acoustically recorded, with acoustic sensors providing excellent discrimination between nucleations from nuclear recoils (neutrons and dark matter) and those from alphas. Our next-generation tonne-scale PICO detector, PICO-500, is currently under construction and is scheduled for commissioning at SNOLAB in Ontario, in 2026.

        A significant source of background events arises from wall nucleations (bubbles that form near or on the vessel surfaces), which reduce both detector livetime and exposure. My research focuses on understanding the nucleation mechanisms underlying these wall events, whose origins remain largely unknown. This involves developing and testing theoretical models of bubble nucleation, performing experimental studies, and analyzing existing detector data to identify correlations or potential causes. Our current theories suggest the higher wall rates could result from a modified Seitz threshold, radioactive isotopes in the detector, or surface defects. Ultimately, this work aims to understand bubble nucleation mechanisms and improve the sensitivity of dark matter searches for PICO-500.

        Speaker: Quinn Malin (University of Alberta)
      • 249
        Radon Mitigation Techniques for PICO-500 Assembly

        Low-background techniques have been used in particle physics for a long time, but their use has surged in the past 30 years. As detectors get bigger and more sensitive, the need to reduce undesirable backgrounds has become the bottleneck to achieving greater sensitivity. The most common background is alpha particles from radon and its daughters. Simulations have shown that PICO-500, the next-generation large-scale dark matter experiment at SNOLAB, will be limited by exposure to ambient radon if no mitigation is done. In this talk, I will detail the PICO detector design, the impact of radon on low-background experiments, and the techniques used during the assembly of PICO-500 to mitigate this background.

        Speaker: Jeremy Savoie (Université de Montréal)
      • 250
        Searching for Dark Matter in Mica

        Many realistic models for dark matter predict the formation of extremely heavy composite particles in the early universe, with masses well in excess of $10^{20}$ GeV. The fluxes of these particles today would be too low to be detectable in direct detection experiments on human timescales. This motivates searches for dark matter signatures in minerals with several-billion-year lifetimes. I discuss prior searches and future prospects for detecting dark matter in muscovite mica.

        Speaker: Andrew Buchanan (Queen's University)
    • (DTP) T3-2 | (DPT)
      • 251
        The problem of time in quantum gravity: a cosmological case

        The problem of time is one of the most profound issues in fundamental physics. It states that all observables are frozen in coordinate time. However, several solutions to this issue have been proposed. I will briefly review the problem, and then present a solution based on the Montevideo Interpretation, a relational approach combined with the so-called evolving constants of motion, in the cosmological case.

        Speaker: Prof. Saeed Rastgoo (University of Alberta)
      • 252
        Causal Structure and Topology Change in Lorentzian Simplicial Gravity

        The gravitational path integral requires summing over geometries, yet it remains unclear which configurations should be included and which principles should constrain this sum. In particular, the role of causal structure and topology change remains subtle, especially in Lorentzian quantum gravity where amplitudes are intrinsically complex and highly constrained.

        In this talk, I will discuss how these questions can be addressed within simplicial approaches to gravity, where geometry is discretized and causal relations can be implemented explicitly. I will review how Lorentzian Regge calculus and related discrete models provide a controlled setting to study causal structure, singular configurations, and their contribution to the path integral.

        Special emphasis will be placed on configurations with localized causal irregularities, such as conical singularities and point-like defects, and on their relation to spatial topology-change. I will illustrate how such configurations contribute to Lorentzian amplitudes and how they differ from their Euclidean counterparts using low-dimensional examples. I will conclude with their implications for Lorentzian approaches to quantum gravity and open questions for future work.

        Speaker: Seth Asante (University of New Brunswick)
      • 253
        Anomaly in canonical semiclassical gravity

        We show that the canonical formulation of the semiclassical Einstein equation, where the matter terms in the constraints are replaced by expectation values of the corresponding operators in quantum states, is inconsistent due to the nonclosure of the resulting constraint algebra.

        Speaker: Irfan Javed (University of New Brunswick)
      • 254
        Superposed quantum evolutions across chaotic and regular regimes

        While the superposition of quantum evolutions is known to produce interference effects, the interference between evolutions with regular and chaotic classical limits remains largely unexplored. Here, we use a Mach–Zehnder interferometer to investigate the superposition of two quantum evolutions, implemented via post-selection, and to compare it with the corresponding classical mixture. The quantum kicked top provides a natural platform for this study, as its classical dynamics ranges from regular to mixed to fully chaotic depending on the Hamiltonian parameters. We show that when a regular evolution is superposed with a chaotic one, the resulting subsystem entropy can exceed that of the classical mixture, provided the contribution of the chaotic branch dominates in the superposed quantum evolution. We further demonstrate that entropy production in such superpositions is strongly influenced by the structure of the underlying classical phase space. We also show that increased entropy generation can occur for purely regular dynamics at small values of the chaos parameter, given an appropriate choice of post-selection. These results reveal a nontrivial interplay between classical chaos and quantum interference in superposed quantum dynamics.

        Speaker: Amit Anand
      • 255
        Local phase space representation of quantum fields

        We derive a phase-space representation of non-relativistic and relativistic quantum fields by expanding the field operator in terms of a set of localized wave packets. The corresponding probability amplitude depends on the mean position and momentum of each wave packet, which serve as phase-space coordinates. The dynamical equation for the probability amplitude takes the form of a classical Vlasov equation with quantum corrections. We discuss applications of the method, including the Schwinger and Unruh effects, and link the results to the locality problem of relativistic quantum fields.

        Speaker: Karl-Peter Marzlin
    • (DAMOPC) T3-3 | (DPAMPC)
      • 256
        TBD

        TBD

        Speaker: Prof. Erika Janitz (U Calgary)
      • 257
        What can two-body correlations tell us about ultracold atoms?

        Ultracold atoms are a laboratory playground for studying emergent phenomena in many-body physics. The first conceptual step along the path from non-interacting (or mean-field) physics to many-body physics is via two-body interactions and correlations. Here, the diluteness of these systems is appealing: the separation of scales between inter-atomic distances (typically over 100 nm) and the interaction range (typically less than 5 nm) provides both a strong connection to ab-initio theory and new avenues for control. I will discuss how two-body correlations can be observed, including a newly developed method that uses rapid dimer projection. Then I will discuss several recent experiments that use correlations to study emergent interaction symmetry and relaxation dynamics of ultracold fermions. As an outlook, I will discuss some open problems in few-body systems.

        Speaker: Joseph Thywissen (University of Toronto)
      • 258
        SME Constraints from Ground-State Antihydrogen Hyperfine Transitions in ALPHA

        The observed dominance of matter over antimatter motivates precision comparisons of particles and antiparticles, where even extremely small differences could signal physics beyond the Standard Model. Antihydrogen, a positron bound to an antiproton, offers an exceptionally clean atomic system because its transition frequencies can be measured precisely and compared with the corresponding hydrogen transitions. In the ALPHA experiment at CERN, antihydrogen atoms are magnetically trapped and probed using microwave spectroscopy, enabling measurements of ground-state hyperfine transitions in a magnetic field and providing sensitive tests of Charge-Parity-Time (CPT) symmetry and Lorentz invariance.
        We outline an analysis framework based on the Standard-Model Extension (SME), which parametrizes possible CPT- and Lorentz-violating effects as small background couplings to particle properties such as spin. These couplings can produce tiny, orientation- and time-dependent shifts in transition frequencies, including signatures that vary with sidereal time due to Earth’s rotation. Because trapped antihydrogen spectroscopy must be performed in the magnetic trapping field, hyperfine analyses often combine transition frequencies to suppress common sensitivity to magnetic-field variations; however, such combinations can also cancel leading spin-dependent SME contributions. We therefore focus on the SME interpretation of individual ground-state positron spin-flip transitions measured in ALPHA, treating each transition as an independent probe of spin-dependent symmetry-violating effects.
        Using a time series of measured transition frequencies together with magnetic-field diagnostics and experimental geometry, we search for sidereal modulations and other orientation-dependent SME signatures while quantifying dominant systematics such as magnetic-field drifts. The goal is to use existing ALPHA datasets and future measurements to set improved antimatter-based bounds on spin-dependent SME parameters and compare them with matter-based limits.

        Speaker: Pouya Heidari (University of Calgary (CA))
      • 259
        Towards a High Precision Measurement of the Hyperfine Splitting in Antihydrogen using Antiproton Spin Flips

        The lack of antimatter in our universe challenges our understanding of nature at a fundamental level. The Standard Model of particle physics predicts that matter and antimatter should exist in equal proportions in the universe, yet we live in a matter-dominated universe. One motivation for studying antimatter is to test whether subtle differences between matter and antimatter could point to physics beyond the Standard Model. Precise comparisons between the simplest atomic systems of matter and antimatter, hydrogen and antihydrogen, provide especially sensitive tests, as the Standard Model predicts their spectra should be identical. In my research, I am developing a new technique to measure the ground-state hyperfine splitting of antihydrogen, a feature in the spectrum of antihydrogen that can be directly compared with hydrogen. I intend to improve the precision of this measurement by inducing antiproton (the antimatter counterpart to the proton) spin flips, which are insensitive to magnetic field variations near a specific field strength. To this end, I am developing a new microwave injection system compatible with the restrictive geometry of antihydrogen traps, capable of delivering the required microwave frequency to trapped antihydrogen atoms. This system will be integrated into the existing experimental infrastructure at the ALPHA Collaboration at CERN. By enabling more precise measurements of antihydrogen’s ground-state hyperfine splitting, my research will provide precise experimental tests of matter–antimatter symmetry.

        Speaker: Sean Wilson (University of Calgary)
    • (DCMMP) T3-4 | (DPMCM)
      • 17:15
        Discussion period led by Normand Mousseau
    • (DPP) T3-5 Complex Plasmas and Fusion | Plasmas complexes et fusion (DPP)
      • 260
        Microwave Imaging Reflectometry for 2-D Plasma Density Fluctuation Measurement on EAST

        A two-dimensional microwave imaging reflectometry (MIR) system has been commissioned on the Experimental Advanced Superconducting Tokamak (EAST) to characterize 2-D electron density fluctuations. The diagnostic operates in the W-band (75–110 GHz) and employs a 12-channel poloidal receiver array combined with 8 frequency-tunable sources, providing 96 simultaneous measurement points with full radial coverage across the pedestal region. This configuration enables the direct measurement of key fluctuation properties, including the poloidal wavenumber, wavelength, rotation velocity, and radial correlation length. Systematic research has been conducted covering table-top experiments of the MIR system, studies on forward modeling methods, development of signal processing algorithms, and field debugging on the EAST device. Initial experimental results from the 2024–2025 campaigns will be presented, focusing on the dynamics of pedestal transport. The MIR system substantially advances the diagnostic capability on EAST, offering new insights into the underlying mechanisms governing pedestal stability and cross-field transport.

        Speaker: Prof. JinLin Xie (University of Science and Technology of China)
      • 261
        Examining the impact of buried layer targets on solid-density plasma formation enabled by Bayesian inference applied to X-ray spectroscopy

        Short-pulse laser-solid interactions provide a unique platform to develop well-characterized laboratory high-energy-density (HED) matter conditions to diagnose fundamental properties such as opacity and equations of state. One common method to produce uniform-density plasma conditions is through the use of buried layer targets, where a high-Z target is embedded within lower-Z tamping layers. We compare the plasma conditions produced by irradiating bare copper and copper embedded within aluminum and plastic tamping layers with Colorado State University's high-contrast, high-intensity ($I \sim 10^{21}$ W/cm$^2$) ALEPH laser. Simultaneous measurements of front-side and rear-side K-shell fluorescence indicate significant bulk plasma heating and the generation of micron-scale, uniformly heated, solid-density plasmas. We implement a Markov chain Monte Carlo algorithm to estimate the probability distributions of the plasma temperature and density derived from the collisional-radiative modeling code SCRAM, and show that the plasma conditions from bare copper targets are hotter and denser than those from the buried layer targets.
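        The Markov chain Monte Carlo estimation step can be sketched with a random-walk Metropolis sampler; the toy forward model below merely stands in for SCRAM, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(te, ne):
    # Toy forward model standing in for the collisional-radiative code
    # SCRAM: a synthetic spectral observable as a function of electron
    # temperature te (eV) and density ne (arbitrary units).
    return 0.5 * np.log(te) + 0.1 * ne

observed = forward(300.0, 8.0) + 0.01   # synthetic "measurement"
sigma = 0.05                            # assumed measurement uncertainty

def log_posterior(theta):
    te, ne = theta
    if not (50.0 < te < 2000.0 and 0.1 < ne < 30.0):
        return -np.inf                  # flat priors with hard bounds
    resid = (forward(te, ne) - observed) / sigma
    return -0.5 * resid ** 2            # Gaussian likelihood

# Random-walk Metropolis sampler over (te, ne).
theta = np.array([500.0, 5.0])
logp = log_posterior(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(scale=[30.0, 0.5])
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:   # Metropolis accept
        theta, logp = prop, logp_prop
    chain.append(theta)
chain = np.array(chain)[5000:]          # discard burn-in
te_samples, ne_samples = chain[:, 0], chain[:, 1]
```

        The retained samples approximate the joint posterior of temperature and density; marginal histograms of `te_samples` and `ne_samples` give the probability distributions referred to in the abstract.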

        This work was supported by the U.S. Department of Energy Office of Science, Fusion Energy Sciences and Lawrence Livermore National Lab (LLNS Subcontract B643845, DOE/NNSA DEAC52), the LaserNetUS initiative at Colorado State University (Contract No. DE-SC-0019076 and DE-SC0021246), the NSERC Alliance - Alberta Innovates Advance Program (Agreement No. 212201089 and 222302077), and the Natural Sciences and Engineering Research Council of Canada (grant no. RGPIN-2021-04373). This research was undertaken, in part, thanks to funding from the Canada Research Chairs Program.

        Speaker: Nicholas Beier (University of Alberta)
      • 262
        Acceleration Dynamics of KTX-CTI Compact Torus Plasma

        Compact torus (CT) injection is a promising technique for core fueling in large magnetic confinement fusion devices, where the achievable injection velocity is determined by the CT acceleration dynamics. In this work, the acceleration behavior of CT plasma in the compact torus injector on the KTX device (KTX-CTI) is investigated experimentally and theoretically.
        Time-resolved magnetic probe and fiber-optic interferometer measurements are analyzed using instantaneous frequency analysis (IFA) and equivalent circuit modeling to characterize the CT motion during acceleration. The results show that the CT acceleration process cannot be adequately described by conventional models that treat the CT as a dimensionless current sheet. Instead, the CT occupies the acceleration channel and exhibits a continuously increasing effective mass as it propagates downstream.
        To capture this behavior, a variable-mass point model is introduced and applied to the KTX-CTI system. The model successfully reproduces the measured acceleration history and final CT velocity, demonstrating significantly improved agreement with experimental observations compared to simplified single-mass formulations. The results indicate that plasma entrainment and magnetic structure evolution play essential roles in governing the CT acceleration process.
        These findings provide new insight into the physical mechanisms underlying CT acceleration and establish a more realistic modeling framework for compact torus injectors. The results are directly relevant to the optimization of high-velocity CT injection systems for core fueling applications in future fusion devices.
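        A minimal sketch of a variable-mass point model, assuming a linear mass-accretion law and a constant accelerating force (both hypothetical; the actual KTX-CTI model is more detailed):

```python
import math

F = 5e3      # constant accelerating force, N (illustrative)
m0 = 2e-6    # initial compact-torus mass, kg (illustrative)
lam = 1e-5   # entrained mass per metre, kg/m (hypothetical accretion law)

# Variable-mass point model: d(m v)/dt = F with m(x) = m0 + lam * x,
# which expands to (m0 + lam*x) dv/dt = F - lam * v**2.
dt, x, v = 1e-9, 0.0, 0.0
for _ in range(20000):                  # 20 microseconds, explicit Euler
    m = m0 + lam * x
    a = (F - lam * v ** 2) / m
    x += v * dt
    v += a * dt

# Constant-mass comparison over the same distance (work-energy theorem):
v_const = math.sqrt(2.0 * F * x / m0)
```

        Because momentum is shared with the entrained mass, the final velocity falls below the constant-mass prediction over the same acceleration length, which is the qualitative behavior the abstract attributes to plasma entrainment.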

        Speaker: Tao Lan (University of Science and Technology of China)
      • 263
        Fusion PFC Testing using Plasma Immersion Ion Implantation

        Plasma Immersion Ion Implantation (PIII) is a high-fluence ion implantation technique well-suited for implanting large-area and non-planar targets. For these reasons it has found application in semiconductor wafer processing as well as nitridation of alloys and surface treatment of biomedical implants. A newer application of PIII has been using it to study the effects of particle radiation damage on tungsten plasma-facing components (PFCs) intended for plasma fusion applications. In this talk I will review some of our recent experimental work on the subject, including the use of synchrotron-based techniques for damage characterization.

        Speaker: Michael Bradley
      • 264
        Hydrodynamic Tuning of Interfacial Instabilities: Pressure-Controlled Bubble Dynamics at the Heptane-Water Interface

        Electrical discharges generated at the interface of immiscible liquids (e.g., oil-water) drive rapid emulsification through the formation of cavitation bubbles and re-entrant liquid jets. While the electrical parameters (voltage, pulse width) are commonly used to modulate this interaction, the role of hydrodynamic confinement remains unexplored in organic-aqueous systems. This work presents an experimental investigation into the dynamics of nanosecond pulsed discharges at the heptane-water interface under variable ambient pressure ($P_{\infty} = 10 - 101$ kPa) and applied voltage. Using synchronized high-speed shadowgraphy (up to 100 kfps) and electrical diagnostics, we characterize the complete life cycle of the plasma-induced bubble. We demonstrate that ambient pressure acts as a critical tuning parameter for the dimensionless standoff distance ($\gamma = d/R_{max}$). Reducing $P_{\infty}$ significantly enhances the maximum bubble radius ($R_{max}$), effectively forcing a transition from stable oscillation to violent inertial jetting without altering the injected electrical energy. We analyze the trade-off between the increased jet penetration depth at low pressures and the reduced collapse intensity (shockwave amplitude) resulting from the lower driving pressure gradient ($\Delta P = P_{\infty} - P_v$). Experimental radius-time curves are validated against the Keller-Miksis formulation to quantify the thermodynamic efficiency of the discharge. The results indicate that optimizing $P_{\infty}$ allows for the control of emulsification regimes—balancing droplet size distribution against mixing depth—offering a novel, non-intrusive control method for plasma-liquid processing applications.
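        The pressure dependence of the maximum bubble radius can be illustrated with a bare-bones Keller-Miksis integration (polytropic gas only, no surface tension or viscosity); all parameters are illustrative rather than fitted to the experiment:

```python
import math

# Bare-bones Keller-Miksis bubble model with a polytropic gas and no
# surface tension or viscosity. All parameters are illustrative.
rho, c, kappa = 1000.0, 1480.0, 1.4   # liquid density, sound speed, gas index
R0, p0 = 100e-6, 5 * 101325.0         # initial radius and gas pressure

def accel(R, v, pinf):
    pb = p0 * (R0 / R) ** (3.0 * kappa)       # polytropic gas pressure
    dpb = -3.0 * kappa * pb * v / R           # dp_b/dt
    num = ((1.0 + v / c) * (pb - pinf) / rho
           + R * dpb / (rho * c)
           - 1.5 * (1.0 - v / (3.0 * c)) * v ** 2)
    return num / ((1.0 - v / c) * R)

def r_max(pinf, dt=5e-9):
    # RK4 integration until the wall velocity first turns negative,
    # i.e. until the bubble reaches its maximum radius.
    R, v = R0, 0.0
    for _ in range(100000):
        k1r, k1v = v, accel(R, v, pinf)
        k2r, k2v = v + 0.5*dt*k1v, accel(R + 0.5*dt*k1r, v + 0.5*dt*k1v, pinf)
        k3r, k3v = v + 0.5*dt*k2v, accel(R + 0.5*dt*k2r, v + 0.5*dt*k2v, pinf)
        k4r, k4v = v + dt*k3v, accel(R + dt*k3r, v + dt*k3v, pinf)
        R += dt * (k1r + 2*k2r + 2*k3r + k4r) / 6.0
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
        if v < 0.0:
            return R
    return R

rmax_atm = r_max(101325.0)   # atmospheric ambient pressure
rmax_low = r_max(10132.5)    # reduced ambient pressure (~10 kPa)
```

        Even this stripped-down model reproduces the trend exploited in the experiment: reducing the ambient pressure at fixed deposited energy enlarges the maximum bubble radius.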

        Speaker: Mohamed G. Elsheikh (Department of Physics, Université de Montréal, Montreal, Canada)
    • (DNP) T3-6 Hadrons-II | Hadrons-II (DPN) T3-6
      • 265
        The Barrel Imaging Calorimeter for the future EIC facility

        The Electron-Ion Collider (EIC) is a next-generation facility designed to study the quark and gluon structure of nucleons and nuclei. Meeting its physics goals requires calorimetry systems with excellent particle identification and high-resolution electromagnetic measurements. The Barrel Imaging Calorimeter (BIC) of the ePIC detector addresses these needs by providing excellent electron identification in the presence of background pions, together with precise energy and position measurements of photons for key processes such as Deeply Virtual Compton Scattering and neutral pion reconstruction. The BIC is being developed through an international collaboration involving institutions in the United States, Canada, Korea, and Germany.

        In this talk, I will present an overview of the BIC concept, which integrates a high-resolution scintillating-fiber–lead sampling calorimeter with low-power AstroPix monolithic active pixel sensors for imaging. I will summarize the physics-driven design choices, the expected performance based on simulation studies benchmarked to EIC requirements, and the current status of component integration and testing. Highlights from recent beam tests will be shown, along with an outlook on the development roadmap for this critical subsystem of the ePIC detector.

        Speaker: Maria Zurek
      • 266
        Exploring Hadronic Structure: Precision Rosenbluth Separation for the Pion Form Factor at Jefferson Lab

        One of the central challenges in modern physics is to unravel hadronic structure, in particular how the observed properties of hadrons (i.e., mass and spin) emerge from the underlying dynamics of quarks and gluons governed by Quantum Chromodynamics (QCD). The pion ($\pi$-meson) is the lightest quark-bound state and provides a particularly sensitive probe of quark confinement, since its structure is directly connected to these fundamental QCD mechanisms. The pion electromagnetic form factor ($F_{\pi}$) is a key observable that encodes information about the internal structure of the pion. It can be accessed through exclusive pion electroproduction in the reaction $p(e,e'\pi^+)n$, where the measured cross-sections depend on the polarization of the exchanged virtual photon. The PionLT experiment at the Thomas Jefferson National Accelerator Facility (JLab) in Newport News, Virginia, was designed to measure $F_{\pi}$ at high $Q^2$ over a broad kinematic range. The experiment uses a unique Rosenbluth longitudinal–transverse (LT) separation technique to determine the longitudinal and transverse cross-sections, $\sigma_L$ and $\sigma_T$, with high precision. The precision of the cross-section separation depends on the accurate determination of small systematic uncertainties, since $F_{\pi}$ is extracted from $\sigma_L$. In this talk, I will present preliminary results for LT-separated cross-sections at $Q^2 = 3.85$ GeV$^2$ measured using the Rosenbluth technique, on behalf of the PionLT Collaboration.
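        At its core, the Rosenbluth LT separation is a straight-line fit: at fixed kinematics the reduced cross-section is linear in the virtual-photon polarization $\epsilon$, $\sigma = \sigma_T + \epsilon\,\sigma_L$. A minimal sketch with synthetic numbers (not PionLT data):

```python
# Rosenbluth LT separation: measuring at two (or more) beam energies,
# i.e. two epsilon values, separates sigma_L and sigma_T by a line fit.

def lt_separate(points):
    """points: list of (eps, sigma) pairs. Ordinary least-squares line;
    returns (sigma_L, sigma_T) = (slope, intercept)."""
    n = len(points)
    sx = sum(e for e, _ in points)
    sy = sum(s for _, s in points)
    sxx = sum(e * e for e, _ in points)
    sxy = sum(e * s for e, s in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Synthetic example: sigma_L = 0.8, sigma_T = 2.0 (arbitrary units)
data = [(0.3, 2.0 + 0.3 * 0.8), (0.7, 2.0 + 0.7 * 0.8)]
sigma_L, sigma_T = lt_separate(data)
```

        Because $\sigma_L$ is the fitted slope, small point-to-point systematic errors are amplified by the limited lever arm in $\epsilon$, which is why the abstract emphasizes tight control of systematic uncertainties.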

        Speaker: Mr Muhammad Junaid (University of Regina)
      • 267
        GPD Factorization in Pion Electroproduction: PionLT

        Generalized Parton Distributions (GPDs) represent a major advance in our understanding of hadronic structure and non-perturbative QCD. To study GPDs, one may use the Deep Exclusive Meson Production (DEMP) reaction, but first one must find the $Q^2$ regime where DEMP is factorizable. In the factorization regime, the cross-section can be divided into two parts: a hard part calculated with pQCD, and a soft part parameterized by the GPDs. Theory predicts factorization will occur at "sufficiently high" $Q^2$. This presentation will discuss the current status of the PionLT experiment at Jefferson Lab to determine the onset of factorization for the exclusive pion electroproduction reaction. To determine factorizability, we must perform an LT separation on the data, which divides the cross-section into components based on the virtual-photon polarization. The PionLT experiment uses the Rosenbluth technique to perform LT separations. If factorization is confirmed, one can extract GPD information from this same separated data.

        Speaker: Nathan Heinrich (University of Regina)
    • (DPE) T3-7 | (DEP)
      • 268
        What makes an effective mentorship program for early career researchers? A case study of the newly designed program at TRIUMF

        Strong mentorship has been shown to increase mentees’ sense of STEM identity, improve student retention, and push mentees to tackle new challenges that promote professional growth. To provide equitable access to the benefits of mentorship, it is essential that we create structured and sustainable support for young scientists through mentorship programs tailored to their needs. However, minimal literature is available to guide the design of effective mentorship programs serving graduate students and postdoctoral scholars. In this talk, we will present preliminary results from our study into effective mentorship program design. Our research follows the TRIUMF Early Career Researchers Mentorship Program as a case study, which is a custom program serving undergraduates, graduate students, and postdocs at TRIUMF. The program also strives to make clear the benefits that mentors receive from participating. Both academic (focus on career progression) and peer (focus on new research environment acclimatization) mentorship program streams will be examined. Preliminary results from program surveys designed to ascertain both program impact and structural success will be presented. We will also provide recommendations for designing and running a successful mentorship program, based on participant feedback.

        Speakers: Dr Benjamin Davis-Purcell (TRIUMF), Gabby Gelinas (University of Calgary)
      • 269
        Neurodivergence in Physics Community: Invisible Labour, Barriers, and Effective Support

        Physics courses place heavy demands on working memory, sensory regulation, and rapid collaboration - especially in large first-year lectures and noisy shared labs. This in-progress study translates two evidence streams into physics-ready practice: (1) a systematic scan of accessibility-related information from Atlantic Canadian universities, and (2) a scoping literature review on autistic students’ barriers and effective support in higher education. The goal is to reduce the invisible labour currently shouldered by neurodivergent students and equip frontline instructors with realistic supportive tools, while offering departments a catalyst for sustainable and long-term change in operational policies. Our approach begins with reviewing institutional webpages, policies, strategic plans, and student-service handbooks against a rubric informed by Universal Design for Learning (UDL) checkpoints, disability-inclusion benchmarks, and the CCWESTT Gender Equality in SETT Report Card. We synthesize peer-reviewed findings most relevant to neurodivergence in the physics community - which discuss options for lowering cognitive load and supportive practices without diluting rigor, and a discipline-level framework situating UDL in postsecondary physics curricula. We also incorporate lab-specific guidance in tool flexibility and explicit planning for sensory/physical barriers. Finally, we present “autistic burnout” not as inevitable, but preventable: chronic exhaustion and reduced tolerance after prolonged lack of support can be mitigated. We clarify why predictable course architecture and transparent communication matter in physics contexts and offer insights into TA training and group-work structures. The intended outcome is a physics culture that treats autism-informed, UDL-aligned teaching as a hallmark of instructional excellence.

        Speaker: Ms Day MacKay (Saint Mary's University)
      • 270
        Promoting Successful Outcomes in Undergraduate Programs at The University of Winnipeg to Encourage Indigenous Students to Undertake Graduate Studies in Science and Engineering

        With one of the largest per capita populations of Indigenous undergraduate students, The University of Winnipeg (UW) is strategically positioned to support Indigenous science students throughout their academic careers, including pathways to graduate school. With funding from the Robbins-Ollivier Award for Excellence in Equity, NSERC Promoscience, and UW, the university runs a suite of programs to reduce barriers and promote participation and success among Indigenous students in NSE. These programs include:
        1. A four-week program for early undergraduate students to bolster confidence in their science skills, gain research experience under the mentorship of a faculty member, and to develop an academic community;
        2. A chapter of the Canadian Indigenous Science and Engineering Society, which provides a consistent, year-round community presence on campus;
        3. Increased access to NSERC USRAs for senior undergraduate students, which provides summer research opportunities; and
        4. A twelve-week program run through Graduate Studies offering a variety of applied research projects, workshops designed to build academic and research skills, and social and cultural activities as students transition from undergraduate to graduate studies.
        The development and implementation of two of the programs are currently led by steering committees to ensure there is input and feedback from Indigenous staff, faculty, students, and program alumni. Informal successes include increased participation rates across all programs, success of student graduates, recognition through program awards, and UW receiving double its allocated quota of USRA recipients on campus. Additional successful outcomes of these programs include increased faculty participation and changes in protocols for student research awards.
        Difficulties for Indigenous students remain and programs need to work actively not to perpetuate barriers for students within these programs. These programs have inconsistently received guidance from Indigenous staff, faculty and students. The stipend amount for students in the programs has sometimes been an obstacle and the previous lack of Indigenous identity verification may also have created barriers for Indigenous students being supported by the program.

        The authors wish to acknowledge funding from NSERC, Tri-Agency Canada Research Chairs Program, and UW.

        Speaker: Melanie Martin
      • 271
        Doing the Work: Advancing Equity, Diversity, and Inclusion at SNOLAB

        Large-scale physics experiments rely on diverse, interdisciplinary teams and long-term sustainability, making equity, diversity, and inclusion (EDI) essential components of scientific excellence. SNOLAB has undertaken a coordinated effort to strengthen EDI across its user community, with particular attention to student and early-career researcher engagement, collaboration-level initiatives, and locally driven projects within the laboratory environment.
        This contribution presents an overview of SNOLAB’s current EDI activities, including the collection and use of demographic information to better understand the composition of its trainee population, the implementation of EDI plans developed by scientific collaborations, and the support of small-scale, targeted projects aimed at improving community well-being and participation. Emphasis is placed on practical approaches to embedding EDI principles into existing research structures, rather than treating them as parallel or auxiliary efforts.
        Lessons learned from these initiatives are discussed, including challenges in data collection, communication across a diverse user base, support for developing EDI advancement plans, and balancing consistency with flexibility among collaborations of varying sizes. Planned next steps are also outlined, highlighting mechanisms for assessment, accountability, and continued community involvement. By sharing its recent experiences and forward-looking strategies, SNOLAB aims to contribute to broader discussions within the physics community and to support the development of inclusive practices at research facilities worldwide.

        Speaker: Erica Caden (SNOLAB)
    • (DPMB) T3-8 | (DPMB)
      • 17:30
        Discussion
    • (DAPI) T3-9 | (DPAI)
      • 272
        Positioning Error Effects on Ptychographic Reconstruction

        Ptychography is a scanning coherent diffraction imaging technique that takes far-field diffraction patterns measured at well-known, predefined scan points and reconstructs the two-dimensional complex X-ray transmission function. This technique has been used to image a wide range of technologically important samples, ranging from micrometeorites to X-ray optics, with reported spatial resolutions down to 17 nm. Ptychography depends on three main components: coherent X-ray sources, efficient phase retrieval algorithms, and precise nano-positioning of samples. An investigation into the impact of nano-positioning errors on ptychographic reconstructions has not yet been conducted. We will systematically examine the effects of these errors in two distinct scenarios: (1) when the predefined scan points are hit accurately, but the recorded positions are erroneous, and (2) when the predefined scan points are missed, but the actual scan point positions are accurately known. With the analysis of these two cases, we intend to enhance the understanding of how ptychographic reconstructions are affected by positioning errors.
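        The two positioning-error scenarios can be parameterized explicitly; the Gaussian error model and magnitudes in this Python sketch are illustrative assumptions, not the study's actual stage model:

```python
import random

random.seed(0)  # reproducible illustrative jitter

def scan_grid(n, step):
    """Ideal n-by-n raster of scan points (units: nm, illustrative)."""
    return [(i * step, j * step) for i in range(n) for j in range(n)]

def case1_recorded_wrong(points, sigma):
    """Case (1): the stage hits the ideal points, but the *recorded*
    positions carry Gaussian read-out errors of rms `sigma`."""
    actual = points
    recorded = [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
                for x, y in points]
    return actual, recorded

def case2_missed_but_known(points, sigma):
    """Case (2): the stage misses the ideal points, but the perturbed
    positions are measured exactly (recorded == actual)."""
    actual = [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
              for x, y in points]
    return actual, actual

pts = scan_grid(4, 50.0)  # 4x4 raster, 50 nm pitch (illustrative)
```

        Feeding each case's `recorded` positions to a phase-retrieval engine while simulating diffraction from the `actual` positions is one way to isolate the reconstruction's sensitivity to each error type.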

        Speaker: Nicholas Simonson (University of Saskatchewan)
      • 273
        Uncovering the nature of closure phases

        We present a fully analytic study of the fork-shaped jumps seen in mm-VLBI closure phases and derive the first closed-form expressions that connect these bifurcations to intrinsic source asymmetries. For a two-component (dipolar) brightness model we prove: (i) a necessary and sufficient branch-cut invariant that links any integer change in the closure phase to an intervening visibility null; (ii) a jump corollary and a local antisymmetry relation that together explain the characteristic fork morphology. These results provide a calibration-independent toolkit for detecting visibility nulls, quantifying low-order source asymmetries, and estimating achievable precision without Monte Carlo simulation.
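        The connection between closure-phase jumps and visibility nulls can be explored numerically: for a two-component source, the closure phase over a baseline triangle is the argument of the triple product of visibilities, and jumps require a null in one of the factors. The 1-D parameterization below is an illustrative sketch, not the paper's formalism:

```python
import cmath
import math

def visibility(u, r, s):
    """Complex visibility of a two-component (dipolar) source: a unit
    point at the origin plus a component of flux ratio r offset by s
    (1-D baseline coordinate u; all quantities dimensionless here)."""
    return 1.0 + r * cmath.exp(-2j * math.pi * u * s)

def closure_phase(u1, u2, r, s):
    """Closure phase over the triangle (u1, u2, -(u1 + u2))."""
    v = (visibility(u1, r, s) * visibility(u2, r, s)
         * visibility(-(u1 + u2), r, s))
    return cmath.phase(v)

# For r < 1 the visibility never vanishes (|V| >= 1 - r), so no nulls
# and no integer jumps; a symmetric point source gives zero closure phase.
assert abs(closure_phase(1.0, 2.0, 0.0, 0.3)) < 1e-12
```

        Scanning `closure_phase` over baselines while varying the flux ratio `r` through unity reproduces the qualitative fork behaviour: the null in one visibility factor is where the phase can change by an integer multiple of $2\pi$.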

        Speaker: Shokoufe Faraji
      • 274
        Nanomembrane-based Microfluidic Platform with Embedded Electrical Pressure Transducer for On-Chip Nanoparticle Quantification

        Accurate quantification of nanoparticle concentration is important in a host of fields, particularly in nanomedicine, electronics, and catalysis. Microfluidic systems present an opportunity to develop low-cost tests for nanoparticle quantification but often suffer technical challenges related to small sample volumes and optical interference from materials used to construct the device. Here we introduce a microfluidic device that integrates an ultrathin silicon nitride nanoporous membrane (nanomembrane) with an on-chip pressure transducer, designed to precisely quantify nanoparticle concentrations within a microfluidic device using an electrical readout. As nanoparticles are captured by the nanomembrane under pressure-driven flow, the pressure differential across it changes and is measured by an on-chip transducer. The pressure transducer utilizes a thin PDMS membrane that deflects under pressure to change the cross-section and ionic flow resistance of an adjacent channel, which is measured using a pair of Ag/AgCl electrodes. This enables the determination of nanoparticle concentration by analysis of the kinetics of trans-membrane pressure changes relative to particle blockage of the nanomembrane. We also propose a statistical model of nanomembrane fouling, which accounts for distributions in pore and particle sizes as well as the variety of blocking mechanisms and their interactions. This model provides a more detailed understanding of nanoparticle filtration behavior and the kinetics of nanopore blocking, enabling accurate concentration determination when used as a predictive model. Experimental validation of the model using the data acquired by the microfluidic device demonstrates a lower limit of detection on the order of $10^8$ particles/mL, offering a versatile, non-optical approach for the in-situ quantification of nanoparticles in a microfluidic device.

        Speaker: Zachary Morris (University of Ottawa)
      • 275
        Using phase interference to characterize dynamic properties – portable NMR rheometry and elastometry

        Dynamic properties are a response of a sample to an applied dynamic stress. Changes in tissue viscoelasticity can be an indication of serious health issues; the flow induced in response to shear stress tells us about fluid viscosity.
        The nuclear magnetic resonance (NMR) signal can be sensitized to measure displacements; spatially resolved NMR, also known as Magnetic Resonance Imaging (MRI), can provide a spatial distribution of stresses. However, conventional NMR and MRI instruments are big, expensive, and can only be used in a specially adapted laboratory. Portable NMR instruments are small, inexpensive, and can be used virtually anywhere. The NMR signal comes from a “sensitive volume” outside the magnet array, greatly expanding the range of potential applications.
        A well-controlled sensitive volume can be used as an integrator to measure dynamic properties, turning a portable NMR device into a versatile sensor. Its combination with a shear wave actuator enables portable NMR elastometry (see Fig. for optics- vs NMR-measured shear wave speeds). An NMR-measured response to shear stress can yield the flow index. A time-resolved detection of the transition time for a spin-up cylinder permits a completely non-invasive measurement of fluid viscosity. All our devices are homebuilt and can be assembled in a physics laboratory.

        W. Selby, V. Belzile, J. Marshall, I. Mastikhin, "Completely noninvasive viscosity characterization using a portable magnetic resonance sensor," Physics of Fluids 37(7), 073119 (2025).
        W. Selby, P. Garland, I. Mastikhin, "Transient shear wave elastometry using a portable magnetic resonance sensor," Magnetic Resonance in Medicine 94(1), 373-385 (2025).
        W. Selby, P. Garland, I. Mastikhin, "A simple portable magnetic resonance technique for characterizing circular Couette flow of non-Newtonian fluids," Journal of Magnetic Resonance 345, 107325 (2022).
        W. Selby, P. Garland, I. Mastikhin, "Dynamic mechanical analysis with portable NMR," Journal of Magnetic Resonance 339, 107211 (2022).

        Speaker: Igor Mastikhin (University of New Brunswick)
      • 276
        Estimating absolute optical properties in turbid media using single-distance broadband continuous-wave spectroscopy

        Light interaction with turbid media has significant biomedical and industrial applications, including non-invasive hemodynamic monitoring and food processing. Light absorption reveals concentrations of chromophores, while scattering informs on the geometry and distribution of microstructures. Broadband continuous-wave spectroscopy (bCWS) is a simple, cost-effective approach for measuring optical attenuation over a broad spectral range. However, due to its limited information content, disentangling the contributions of absorption and scattering from the overall attenuated signal remains a challenge. In contrast, more information-rich techniques like time-resolved spectroscopy (TRS) can reliably separate absorption and scattering; however, TRS is complex, costly, and slower than bCWS. We previously developed an algorithm based on the diffusion approximation for modelling single-distance bCWS measurements using chromophore absorption coefficients and Mie theory, which was further refined and validated in this study for estimating absolute optical properties. A series of phantom experiments was conducted, simultaneously acquiring bCWS (wavelengths: 680-920 nm) and TRS (wavelengths: 760, 808, 850, 915 nm) measurements for comparison. First, Intralipid phantoms were prepared by mixing distilled water and increasing concentrations of Intralipid, thereby increasing light scattering while absorption remained constant. Next, an Intralipid phantom was prepared with a buffer solution, and whole animal blood (Hb) and baker’s yeast (CCO) were subsequently added in two separate steps to change absorption. In the first set of phantoms, scattering increased linearly at each step, while absorption remained constant at that of pure water. In the second set of phantoms, scattering increased at each step, while absorption increased from that of pure water to the summed absorption of pure water, Hb, and CCO. The bCWS-estimated optical values demonstrated high agreement with TRS. These findings demonstrate that the algorithm reliably estimates optical properties from bCWS measurements at a single distance, providing an accessible approach for assessing optical properties of turbid media.

        Speaker: Rasa Eskandari (Western University)
      • 277
        MC-MICAP-MS: A New Era of Stable Isotope Mass Spectrometry

        The relative abundances of an element’s stable isotopes can change due to biological, physical, and chemical mass-dependent processes in living systems and the environment. In a biological system, shifts in an element’s isotopic composition may indicate changes in its regulation due to disease, or exposure to toxic levels of the element or its parent isotopes. These changes in isotopic composition are subtle, and detecting differences in relative abundance requires precision measurement capabilities. The multiple collector mass spectrometer coupled to an inductively coupled argon plasma ion source (MC-ICP-MS) is the established method for high precision isotope abundance measurements. However, for some elements such as calcium and iron, isobaric interferences with argon-based polyatomic ions severely limit sensitivity and precision. A newly introduced instrument at the University of Calgary is enabling precision measurement of such isotopes with a multiple collector plasma mass spectrometer by replacing the argon plasma ion source with a microwave inductively coupled atmospheric pressure plasma (MICAP) source. The MICAP ion source uses a nitrogen-sustained plasma, eliminating argon-based interferences. In this talk, I will describe this new instrument and demonstrate how the MICAP minimizes the limitations faced by conventional argon MC-ICP-MS. Specifically, I will showcase the effectiveness of this novel instrument using examples of zinc and calcium isotopes and explore the new applications possible in the study of cycling of trace metals in biological systems.

        Speaker: Gabby Gelinas (University of Calgary)
    • 17:45
      Travel Time | Déplacement
    • DAMOPC Poster Session & Student Poster Competition | Session d'affiches DPAMPC et concours d'affiches étudiantes
      • 278
        A Large-Scale Programmable Trapped-Ion Quantum Simulator

        Quantum simulators are platforms used to simulate classically intractable quantum systems. The ability to select an initial state, engineer interactions and select a measurement scheme enables a programmatic way to study quantum effects. Trapped atomic ions provide a versatile quantum simulation testbed due to their native all-to-all interactions, long coherence times and simple qubit state manipulation schemes. We present a large-scale quantum simulator, engineered to hold over 30 $^{171}$Yb$^+$ ion-qubits for both analog and digital quantum processing. The system incorporates an extreme-high-vacuum (XHV) chamber, site-selective qubit addressing, in situ mid-circuit measurement and reset, and high numerical aperture (NA) imaging for rapid state detection. We measure a local pressure of $(3.9 \pm 0.3) \times 10^{-12}$ mbar using the ions in the room-temperature system [1], corresponding to a long average interval of (1.9 ± 0.1) hrs/ion between collisions with background atoms, suitable for large-scale quantum simulation experiments. Furthermore, a dual acousto-optic deflector (AOD) configuration has been implemented for site-selective addressing, as our means to implement single- and multi-qubit logical operations. Using Fourier holography, a pristine beam is engineered to address a single ion with low crosstalk to measure and reset a qubit state during a circuit [2]. This platform is intended to enable a wide range of quantum simulation experiments, including investigations of driven-dissipative quantum dynamics and measurement-driven quantum phase transitions.

        [1] L. Hahn et al., arXiv:2512.11794 (2025).
        [2] S. Mahato et al., arXiv:2512.13882 (2025).

        Speaker: Fabien Lefebvre (University of Waterloo)
      • 279
        In-situ Waveform Sampling in a Reaction Microscope

        We introduce a novel in-situ method that utilizes momentum microscopy via single-atom ionization to capture the electric field of a femtosecond laser pulse.
        Studying ultrafast, strong-field-driven phenomena in atoms and molecules is central to attosecond science, enabling direct access to electron dynamics on sub-cycle time scales. Reaction Microscopy, also known as Cold Target Recoil-Ion Momentum Spectroscopy (COLTRIMS), provides coincidence three-dimensional momentum measurements of photoelectrons and ions and has become a cornerstone technique for probing ultrafast processes. Since many field-driven processes evolve on a sub-cycle time scale, the complete characterization of the ionizing pulse is a critical aspect of many ultrafast photoionization experiments. Near infrared femtosecond laser technology has been the main driver of ultrafast photoionization science for the past two decades. However, most currently available near-infrared pulse characterization techniques that rely on nonlinear optical effects (e.g., FROG, SPIDER, D-scan) are primarily sensitive to the intensity envelope and do not provide information about the sub-cycle temporal evolution of the pulse. Strong-field ionization-based approaches offer direct sensitivity to the electric field waveform but are typically implemented ex situ and may require dedicated vacuum systems or auxiliary sampling pulses.
        Tunneling Ionization with a Perturbation for the Time-domain Observation of an Electric Field (TIPTOE) enables sub-cycle sampling of arbitrary light waveforms in gases and solids. Here, we extend TIPTOE to the single-atom ionization regime inside a COLTRIMS spectrometer, enabling true in-situ waveform characterization. In our setup, a near-infrared femtosecond laser pulse is split into an intense, ionizing pulse and a weak, but otherwise identical, sampling pulse. The delay-dependent modulation of the ionization yield directly maps to the electric field of the sampling pulse. In addition, the photoelectron momentum distribution also provides direct access to the vector potential of the laser pulse. In summary, we present a novel in-situ technique that combines COLTRIMS and TIPTOE to sample electric field waveforms via momentum microscopy using single-atom ionization. This approach offers a new level of control for studies of strong-field dynamics using photoionization coincidence momentum spectroscopy.

        Speaker: Pooya Ghavami (Joint Attosecond Science Laboratory, National Research Council of Canada and University of Ottawa, Ottawa, Canada)
      • 280
        Low-loss nonreciprocal phase-shifting element for cavity applications

        Optical nonreciprocity refers to the phenomenon where light in a system behaves differently depending on its direction of propagation. We aim to create a low-loss nonreciprocal phase-shifting element whose optical path length depends on the direction of propagation. When placed in a ring cavity, this asymmetry would create a difference between the optical path lengths of the counterpropagating waves, resulting in different resonance conditions inside the cavity. Thus the cavity can simultaneously support counterpropagating light at two distinct frequencies, both on resonance. The interference of two counterpropagating waves at different frequencies will create a moving optical lattice, which can be used to trap and transport atoms. We present multiple proposed designs with different materials and mechanisms to achieve nonreciprocity, and a theoretical comparison of their performance. These mechanisms include combinations of the Faraday effect, stress-induced birefringence, Brewster’s angle, and anti-reflective coatings. We measured relevant optical properties of the proposed materials, such as fused silica and terbium gallium garnet. We present prototypes of the nonreciprocal element as well as our investigations of their performance, such as optical loss, stability and tunability. Finally, we discuss the advantages, limitations, and practical challenges of each approach in creating the low-loss nonreciprocal phase-shifting element.

        Speaker: Jannet Zang (University of Toronto)
      • 281
        High-Harmonic Generation in a Solid by Quantum States of Light

        We present a theoretical framework to investigate high-harmonic generation (HHG) driven by Squeezed Coherent Thermal States (SCTS) of light. The study is motivated by the complementary roles played by squeezing and thermal fluctuations: while squeezing enhances quantum correlations, thermal photons introduce decoherence and noise, and their coexistence in a coherent background provides a rich platform to explore competing quantum effects in nonlinear light–matter interactions.

        The system is modeled as a two-level system driven by a quantized electromagnetic field, treating both the driving field and the emitter fully quantum mechanically. The initial field state is prepared in a SCTS, allowing continuous interplay between the coherent, squeezed vacuum, and thermal driving within a unified description. High-harmonic spectra are formulated in terms of the dipole autocorrelation function and evaluated using exact diagonalization and spectral decomposition, without invoking semiclassical, Floquet, or phase-averaging approximations.

        This study establishes a general and flexible framework to systematically examine how coherence, squeezing, and thermal noise jointly influence quantum HHG processes. The study aims to provide insight into the interplay between nonclassical photon statistics and strong light–matter coupling, with direct relevance to cavity and circuit quantum electrodynamics platforms.

        Speaker: Kous Mandal (University of Windsor)
      • 282
        Quantum Defect Extrapolations for the Rydberg P-States of Helium and Comparison with Experiment up to n = 102

        Recent advances in theoretical techniques allow high precision calculations for the high-lying Rydberg P-states of helium up to principal quantum number n = 35 [1]. The present work develops extrapolation techniques based on the Ritz quantum defect method for the nonrelativistic energy, and 1/n expansions for the relativistic and quantum electrodynamic corrections, which otherwise violate the Ritz expansion ansatz that only even powers of 1/n contribute. The resulting ionization energies of the P-states are accurate to better than $\pm$1 kHz in the range n > 35. Comparison with experimental transition frequencies up to n = 102 [2] yields ionization energies for the $1s2s\;^3S_1$ state. The results confirm a 9$\sigma$ discrepancy between theory [3] and experiment for the ionization energy of the $1s2s\;^3S_1$ state.
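        The Ritz extrapolation described here amounts to a quantum-defect expansion in even powers of $1/(n-\delta)$, solved self-consistently. The sketch below (Python) uses the hydrogenic Rydberg frequency and invented coefficients purely for illustration, not the paper's fitted values:

```python
def ritz_defect(n, d0, d2, d4=0.0):
    """Ritz quantum defect delta(n) = d0 + d2/(n-delta)^2 + d4/(n-delta)^4,
    iterated to self-consistency (converges quickly for small d2, d4)."""
    delta = d0
    for _ in range(20):
        m = n - delta
        delta = d0 + d2 / m**2 + d4 / m**4
    return delta

def p_state_energy(n, d0, d2, d4=0.0, rydberg_hz=3.2898419603e15):
    """Binding energy -R/(n - delta)^2 in Hz; hydrogenic Rydberg value
    used purely for illustration (helium needs a reduced-mass Rydberg)."""
    return -rydberg_hz / (n - ritz_defect(n, d0, d2, d4))**2

# Illustrative coefficients (NOT fitted helium values)
e35 = p_state_energy(35, 0.068, -0.018)
e102 = p_state_energy(102, 0.068, -0.018)
assert e35 < e102 < 0  # binding weakens as n grows
```

        Only even powers of $1/(n-\delta)$ appear in the Ritz ansatz, which is why the relativistic and QED corrections that break it must be handled by separate 1/n expansions, as the abstract describes.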

        [1] G.W.F. Drake, A.T. Bondy, O.P. Hallett and B.C. Najem, Phys. Rev. A 113, 012810 (2026).
        [2] G. Clausen et al., Phys. Rev. A 111, 012817 (2025).
        [3] V. Patkos, V.A. Yerokhin and K. Pachucki, Phys. Rev. A 103, 042809 (2021).

        Speakers: Mr Oliver Hallett (University of Windsor), Ms Titamarie Maggio (University of Windsor), Mr Benjamin Najem (University of Windsor)
      • 283
        Development of a Field-Deployable Laser System for Canada’s First Portable Quantum Gravimeter

        We present progress toward the development of Canada’s first portable quantum gravimeter at the University of New Brunswick, which is designed to detect time-variable gravity anomalies such as solid-earth tides and ocean loading phenomena. This work focuses on the implementation of a field-deployable laser system for cooling and manipulating a gas of rubidium-87 atoms at the heart of the gravimeter. The laser system involves three space-qualified fiber lasers: a Reference laser locked to an electronic transition in $^{87}$Rb, and two Follower lasers locked to the Reference using high-bandwidth optical phase-locked loops. The two Followers are amplified to high power and injected into the sensor head to facilitate gravity measurements. Similar all-fibered designs have demonstrated resilience against external thermal and vibrational noise, helping to ensure reliable operation in the field. Our laser system bridges the gap between cold-atom physics and practical geophysical applications, providing a stable foundation for high-sensitivity gravitational mapping.

        Speaker: Owen Doty
      • 284
        Fe:ZnSe MIR Amplifier pumped by 0.4-J laser

        Strong-field science grew rapidly in the 1990s with the emergence of Ti:Sapphire-based chirped pulse amplification (CPA) systems operating at 0.8 µm wavelength. Although many strong-field phenomena benefit from longer laser wavelengths [1, 2], to date this field has been largely restricted to the near-infrared range of 0.8 to 1 µm. Fe:ZnSe-based CPA offers an efficient approach for reaching high peak power at 4 μm [3]. Our group established a 12-pass Fe:ZnSe CPA laser pumped by two 34-mJ, 120-μs Er:YAG lasers and demonstrated 250-fs, 4.95-mJ pulses at a center wavelength of 4.07 μm [4]. To increase the output peak power of this system, we are establishing an Fe:ZnSe amplifier pumped by a high-energy diode-pumped Er:YAG laser. Here we report measurements of the time-resolved small-signal gain (SSG) of Fe:ZnSe crystals, with the goal of a terawatt (TW) mid-infrared CPA.
        The schematic of the measurement setup is shown in Fig. 1. The updated high-power Fe:ZnSe amplifier is pumped by a diode-pumped Er:YAG laser. We used two 8×8×10 mm³ Fe:ZnSe crystals with a doping concentration of 5×10¹⁸ cm⁻³. The crystals were mounted in a vacuum chamber and cooled to 42–90 K to increase the upper-level lifetime and suppress thermal lensing.
        The single-pass small-signal gain was measured using a 4-μm CW laser diode; the results are shown in Fig. 2. The time-resolved gain was measured at different pump energies. Fig. 2(a) shows the gain at approximately 70-mJ pump energy, where a gain of over 23 was measured. When the pump energy is increased to 280 mJ, the gain rises to ~150, as shown in Fig. 2(b). To the best of our knowledge, this is the highest gain ever reported for an Fe:ZnSe amplifier.
        In summary, we obtained a small-signal gain of 150 at 280-mJ pump energy with the Fe:ZnSe cooled to 42 K. We are currently developing a multi-stage, multi-pass Fe:ZnSe CPA laser. The first-stage amplifier is expected to deliver >30-mJ pulses at 100 Hz, which will be further amplified to the TW level for attosecond science and other strong-field physics experiments.
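        As a point of reference (a standard laser-physics relation, not taken from the abstract), the single-pass small-signal gain is $G_0 = \exp(\sigma_{\mathrm{em}}\,\Delta N\,L)$, where $\sigma_{\mathrm{em}}$ is the stimulated-emission cross section, $\Delta N$ the population-inversion density, and $L$ the crystal length; the reported gain of 150 corresponds to $\sigma_{\mathrm{em}}\,\Delta N\,L = \ln 150 \approx 5.0$.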

        Speaker: Fei Xu (University of Ottawa)
      • 285
        A High-Power Picosecond Source For Pumping Mid-Infrared Optical Parametric Amplifiers

        This work investigates the power-scaling potential of a holmium-doped yttrium lithium fluoride (Ho:YLF) chirped pulse amplifier (CPA) for pumping a mid-infrared zinc germanium phosphide (ZGP) optical parametric chirped pulse amplifier (OPCPA). The demonstrated double-pass Ho:YLF CPA delivers 2.05 μm pulses with 6 mJ pulse energy, 8 ps duration, and 5 kHz repetition rate, enabling OPCPA outputs of 400 μJ at 2.5 – 5 μm and compressible to few-cycle pulse durations. This mid-infrared source has been successfully used to seed an Fe:ZnSe CPA and is being scaled toward few-millijoule pulse energies to support attosecond pulse generation via high harmonic generation (HHG).
        Few-cycle, few-millijoule mid-infrared laser sources are critical for applications in attosecond science, spectroscopy, terahertz and frequency-comb generation, surgery, and remote chemical sensing. In particular, extending HHG into the soft X-ray and water-window spectral regions (280–530 eV) requires longer-wavelength driving lasers due to the quadratic scaling of the cutoff photon energy with wavelength. The ZGP OPCPA presented here generates an octave-spanning spectrum from 2.5 to 4.8 μm, offering a pathway toward keV-level HHG photon energies. ZGP is well suited for this application due to its large effective nonlinear coefficient (75 pm/V), broad transparency window (2–12 μm), and high conversion efficiency. Pumping ZGP above 2 μm minimizes absorption, making the 2.05 μm Ho:YLF CPA an optimal choice.
        Compared to earlier Ti:sapphire-driven systems limited to 1 kHz operation, the present architecture employs a commercial Yb:KGW front-end and optimized Ho:YLF CPA cooling, enabling operation at 5 kHz with comparable pulse energies. In the Ho:YLF CPA, a conversion efficiency of 38% from absorbed pump power to output power was achieved at 31 W average power. Planned Ho:YLF booster stages are expected to scale the 2.05 μm pulse energy to 30 mJ, placing few-millijoule, CEP-stable mid-infrared pulses via OPCPA within reach. This work demonstrates a compact, high-repetition-rate, cryogen-free mid-infrared femtosecond source suitable for next-generation attosecond experiments.
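        The quadratic wavelength scaling invoked above is the standard semiclassical cutoff law (quoted here for reference, not from the abstract): $E_{\mathrm{cutoff}} = I_p + 3.17\,U_p$, with ponderomotive energy $U_p \propto I\lambda^2$, so driving at 2.5–5 μm rather than 0.8 μm raises the cutoff by an order of magnitude or more at fixed intensity.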

        Speaker: Chase Geiger (University of Ottawa)
      • 286
        Design of a Barium Ion Quantum Information Processing Testbed

        Barium ions have many interesting features suitable for scalable quantum information processing (QIP), such as metastable states with long lifetimes, enabling the manipulation of beyond-two-level systems, or qudits. Further, visible-wavelength transitions between Barium-ion energy levels enable programmable qubit/qudit manipulation using fiber-based optical modulators. Here, we discuss a QIP apparatus engineered for 16 individually programmable Barium ions. The system features a room-temperature vacuum chamber with pressure < 4×10⁻¹¹ mbar and high optical access from three directions for optical control and measurement of qubits/qudits. A laser-written waveguide, together with fiber-based acousto-optic modulators, enables precise, programmable control of the intensity, frequency, and phase of laser beams at each ion. These capabilities will enable powerful QIP tasks, such as creating arbitrary interactions between qubits to simulate fully connected quantum spin systems.

        Speaker: Mr Akbar J Jozani (University of Waterloo)
      • 287
        Separating nonlinear orders in pump-probe spectroscopy: the role of signal saturation

        Optical spectroscopy is a powerful tool for characterizing excitonic systems, ranging from solid-state photovoltaic materials to molecular light-harvesting complexes. Beyond the linear light–matter interaction, techniques such as pump-probe spectroscopy provide access to the nonlinear optical response of these systems. In the perturbative regime, the dominant contribution is the third-order response, which encodes key information on single-exciton dynamics and relaxation pathways. However, in extended systems or at pump intensities exceeding the perturbative limit, many-body effects arising from exciton–exciton interactions (such as exciton–exciton annihilation) can emerge, contaminating the effective third-order signal and adding nuances to the optical properties of the material in operando conditions.
        In this work, we present a recently introduced approach to disentangle these higher-order contributions and isolate specific nonlinear orders by analyzing signals acquired at different pump intensities [1,2]. A central step of this methodology is a quantitative understanding of the commonly observed experimental phenomenon of signal saturation with pump intensity, which is crucial for determining which pump intensities enable reliable order separation. Our analysis is based on an open-quantum-system modeling of exciton systems, combining coherent light–matter coupling with dissipative processes, and numerical simulations of pump–probe signals for prototypical molecular aggregates. We identify three distinct saturation regimes: (i) a coherent regime dominated by Rabi oscillations, (ii) an incoherent short-pulse regime in which signal saturation follows an exponential law, and (iii) an incoherent long-pulse regime characterized by a saturable absorption law.
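The intensity-variation separation described above can be sketched in a few lines. This is a hedged illustration (function names and numbers are illustrative, not from the abstract or refs [1,2]): the signal measured at scaled pump intensity $x I_0$ is modeled as a power series $S(x) = \sum_n x^n S_n$ in the pump intensity, and measurements at several scalings give a Vandermonde system that isolates each nonlinear order.

```python
import numpy as np

# Hedged sketch: the pump-probe signal at scaled pump intensity x*I0 is
# modeled as S(x) = sum_n x^n S_n (orders counted in pump intensity, so the
# effective third-order field response is linear in x, fifth order is
# quadratic, etc.).  Measuring at several scalings x_m yields a linear
# (Vandermonde) system that can be inverted to isolate each order S_n.

def separate_orders(scalings, signals, max_order):
    """Solve V @ S = signals for the contributions S_1..S_max_order,
    where V[m, n-1] = x_m**n."""
    V = np.array([[x**n for n in range(1, max_order + 1)] for x in scalings])
    S, *_ = np.linalg.lstsq(V, signals, rcond=None)
    return S

# Synthetic example: a signal linear in intensity (third order in the field)
# contaminated by a quadratic-in-intensity (fifth order) contribution.
x = np.array([0.25, 0.5, 0.75, 1.0])
true_orders = np.array([2.0, -0.3])                 # [S_1, S_2]
measured = true_orders[0] * x + true_orders[1] * x**2

recovered = separate_orders(x, measured, max_order=2)
```

In practice the usable intensity range is bounded by the saturation regimes discussed above, since saturation mixes all orders and breaks the truncated power-series model.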
        [1] P. Malý, J. Lüttig, P. A. Rose, A. Turkin, C. Lambert, J. J. Krich, and T. Brixner, Separating Single- from Multi-Particle Dynamics in Nonlinear Spectroscopy, Nature 616, 280 (2023).
        [2] J. J. Krich, L. Brenneis, P. A. Rose, K. Mayershofer, S. Büttner, J. Lüttig, P. Malý, and T. Brixner, Separating Orders of Response in Transient Absorption and Coherent Multidimensional Spectroscopy by Intensity Variation, J. Phys. Chem. Lett. 5897 (2025).

        Speaker: Federico Gallina
      • 288
        Predicting an Ultrafast Dynamic Refractive Index Grating in an Epsilon-Near-Zero Conductive Oxide Thin Film Created by Femtosecond Laser Pulses

        We simulate the ultrafast optical response of an Indium Tin Oxide (ITO) thin film near its Epsilon-Near-Zero (ENZ) wavelength when excited by two femtosecond laser pulses. A temperature dependent Drude model captures the nonlinear optical behavior arising from electron heating in the thin ITO film. Initial results show that ultrafast heating from two pulses induces a complex dynamic grating structure in the permittivity, revealing its potential for ultrafast optical modulation and control.
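        A temperature-dependent Drude permittivity of the kind commonly used for ITO near its ENZ point can be written (one common form; the exact parametrization in this work may differ) as $\varepsilon(\omega) = \varepsilon_\infty - \omega_p^2(T_e)/[\omega(\omega + i\gamma)]$, with $\omega_p^2(T_e) = N e^2/[\varepsilon_0\, m^*(T_e)]$; electron heating modifies the effective mass $m^*$ through conduction-band nonparabolicity, shifting the wavelength at which $\mathrm{Re}\,\varepsilon$ crosses zero.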

        Speaker: Megan Pulford
      • 289
        Quantum Dam Breaks in Expanding Bose–Einstein Condensates and the Emergence of Dispersive Waves, Double Rainbows, and Black-Hole Horizons

        The sudden removal of trapping potentials in Bose-Einstein condensates (BECs) gives rise to superfluid versions of hydrodynamic $\textit{dam-breaking}$ problems, leading to rich quantum dispersive-wave dynamics. Denoting $\Delta n/n_{0}$ as the initial difference in density between the two reservoirs on either side of the dam relative to the lower reservoir density $n_{0}$, we numerically and analytically study the Gross-Pitaevskii equation to model both perturbative $\Delta n \ll n_{0}$ and non-perturbative $\Delta n\sim n_{0}$ quantum dam-breaking scenarios. We compare and contrast the different regimes by making connections to the mathematics of tidal bores, and reveal an underlying wave structure that mimics the physics of double rainbows. In the non-perturbative regime dam-breaking can be accompanied by the formation of dynamical sonic event horizons within rarefaction waves. We comment on the horizon dynamics and the possibility of detecting sonic Hawking radiation from such horizons.
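        For reference, the Gross-Pitaevskii equation studied here is the standard mean-field form $i\hbar\,\partial_t\psi = \left(-\tfrac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r},t) + g|\psi|^2\right)\psi$, where $V$ is the (suddenly removed) trapping potential and $g$ the interaction strength; the dam-break initial condition corresponds to a step of height $\Delta n$ in the density $|\psi|^2$.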

        Speaker: Liam Farrell (McMaster University)
      • 290
        Minimalistic framework for photonic unitary transformations

        We study a minimalistic framework for the realization of photonic unitary transformations based on thin phase gratings interleaved with free-space propagation. The problem of realizing a unitary transformation on a given number of modes is expressed as a simple matrix model in transverse momentum space. This approach drastically reduces the number of system parameters compared to general multiplane light converters. We demonstrate the optimization of the system parameters to target $d\times d$ unitary transformations on the diffraction orders for $d=2$ to $d=5$. The effect of the number of diffraction-propagation layers on the performance of the optical system is examined, showing that it can achieve unitary transformations such as the discrete Fourier transform, (generalized) Pauli gates, and the $T$-gate with high accuracy given a sufficient number of layers.
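The matrix model in transverse momentum space can be sketched as follows. This is a minimal illustration under assumed conventions (function names and parameters are illustrative, not the authors' code): a thin phase grating acts as a convolution, i.e. a matrix of the Fourier coefficients of $e^{i\phi(x)}$ truncated to the retained diffraction orders, while free-space propagation is diagonal with order-dependent phases.

```python
import numpy as np

# Illustrative sketch (assumed conventions): one diffraction-propagation
# layer on d retained orders.  The grating couples orders through the
# Fourier coefficients of exp(i*phi(x)); propagation only adds a phase
# per transverse-momentum order.

def grating_matrix(phase_samples, d):
    """Matrix of Fourier coefficients of exp(i*phi(x)) coupling d orders."""
    t = np.exp(1j * np.asarray(phase_samples))
    c = np.fft.fft(t) / len(t)          # Fourier coefficients of the grating
    return np.array([[c[(m - n) % len(t)] for n in range(d)]
                     for m in range(d)])

def propagation_matrix(kz_phases):
    """Diagonal free-propagation phases for the retained orders."""
    return np.diag(np.exp(1j * np.asarray(kz_phases)))

# One layer acting on d = 3 retained orders; a full system is a product of
# several such layers, whose phase samples would be optimized to target a
# desired unitary on the diffraction orders.
phi = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
layer = propagation_matrix([0.1, 0.2, 0.3]) @ grating_matrix(np.sin(phi), 3)
```

Note that the truncated grating matrix is not unitary on its own (power leaks into discarded orders), which is precisely why a sufficient number of layers is needed to approximate a target unitary with high accuracy.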

        Speaker: Manuel Ferrer Garcia (University of Ottawa)
      • 291
        Deterministic creation of blue-center single-photon emitters in carbon-doped h-BN

        Single-photon emitters (SPEs) in hexagonal boron nitride (h-BN) offer a promising platform for quantum photonics, due to their robust room-temperature operation and compatibility with two-dimensional device architectures. Among them, blue-centers (B-centers) emitting near 436 nm have drawn special attention for their reproducibility and spectral purity [1]. Previous studies have shown that B-center activation can be achieved by electron-beam irradiation of carbon-related defects [2]; however, deterministic and reproducible SPE generation remains a critical challenge for device integration. In this study, we investigate the deterministic creation of B-centers in carbon-doped h-BN. The material is first characterized using conventional techniques (electron microscopy, luminescence) to evaluate its doping. We then follow a statistical approach exploring the effects of flake thickness, electron dose, and dwell time. This process optimization maximizes the single-photon emitter creation probability while ensuring spatial control. The resulting emission properties are analyzed via extensive optical characterization, including time-resolved photoluminescence spectroscopy and second-order photon correlation measurements. We then examine the electrical behavior of defect-assisted tunneling in Gr/C:hBN/Gr junctions, both with and without preliminary electron-beam irradiation. Electrical characterization is performed directly on the photoluminescence (PL) optical setup, enabling simultaneous probing of light emission from these devices. By analogy with previous work on different emitters [3], we expect to demonstrate controlled electroluminescence from the blue emitters, suitable for integration into more complex optoelectronic architectures for future quantum technologies.
        [1] C. Fournier et al., Nat. Commun. 12, 1–6 (2021)
        [2] S. Nedić et al., Adv. Opt. Mater. 12, 2400908 (2024)
        [3] M. Grzeszczyk et al., Light Sci. Appl. 13, 155 (2024)

        Speaker: Mr Gaurang Gautam (Université de Sherbrooke)
      • 292
        Singular Value Gadgets for Embedding QUBOs on Rydberg Atom Hardware

        Quadratic Unconstrained Binary Optimization problems (QUBOs) provide relevant testing grounds for quantum annealing platforms. However, mapping the logical QUBO onto a given quantum register is a known computational bottleneck. For Rydberg atom devices, the natural Unit-Disk topology arising from the Rydberg blockade mechanism limits the types of problems that can be naturally embedded on the hardware. Typical approaches to embedding include problem-based heuristics or general "gadget" reductions. The latter are graph-theoretic constructions that mediate interactions via ancillary qubits, typically incurring quadratic ancilla overhead. To address this scalability issue, we propose an approximate heuristic based on the spectral properties of the QUBO's adjacency matrix. By decomposing the QUBO into its singular modes, we use individual ancillas to mediate the structure of each relevant mode. We optimize for the position of the ancilla within a polygon defined by the logical qubits, while adjusting the local detuning of these ancillas to minimize the spectral distance between the QUBO and the physical Hamiltonian. Preliminary results suggest that this approach exhibits linear scaling with respect to the number of ancillary qubits. This work establishes a framework for developing further spectral embedding methods and provides an alternative for obtaining approximate solutions to larger problems.
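The spectral decomposition at the heart of the proposal can be sketched in a few lines. This is a hedged illustration of the idea only (not the authors' implementation; the ancilla placement and detuning optimization are omitted): the QUBO coupling matrix is decomposed into singular modes, and only the leading modes are kept, each of which would be mediated by a single ancilla.

```python
import numpy as np

# Hedged sketch: decompose the QUBO coupling matrix Q into singular modes,
# Q = sum_k s_k u_k v_k^T, and keep only the leading modes.  Each retained
# mode would then be mediated by one ancilla atom, so the ancilla count
# scales with the number of retained modes rather than with the number of
# pairwise couplings.

def leading_modes(Q, k):
    """Rank-k spectral approximation of the coupling matrix via SVD."""
    U, s, Vt = np.linalg.svd(Q)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# Toy 3-variable QUBO with uniform couplings (a triangle graph).
Q = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
Q1 = leading_modes(Q, 1)   # structure carried by the dominant mode
```

The remaining (and hard) part, per the abstract, is placing each ancilla inside the polygon of its logical qubits and tuning its local detuning so the physical Hamiltonian's spectrum approximates that of the truncated Q.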

        Speaker: Edward García Hernández (University of Ottawa)
      • 293
        Ultrabroadband pulse compression for extreme bandwidth two-dimensional electronic spectroscopy

        Two-dimensional electronic spectroscopy (2DES) has emerged as a fundamental technique to study ultrafast electronic structure, coherence, as well as energy and charge transfer in molecular systems. Correlating excitation and detection frequencies with femtosecond time resolution, 2DES is uniquely capable of providing comprehensive insight into electronic couplings and nonequilibrium dynamics. However, the performance of 2DES experiments is fundamentally limited by the spectral amplitude and phase properties of the ultrashort laser pulse sequence that creates the nonlinear signal. Pulse shaping is becoming an increasingly important topic as 2DES continues to push towards broader bandwidths and higher temporal resolution.

        Ultrafast pulse shaping allows the manipulation of the spectral amplitude and phase of ultrashort laser pulses. In combination with pulse shaping, Multiphoton Intrapulse Interference Phase Scan (MIIPS) provides a powerful, adaptive approach to characterization and compensation of higher-order spectral phase distortions. Unlike purely diagnostic techniques, MIIPS enables iterative dispersion correction directly at the sample plane to ensure near–transform-limited pulses across the full laser bandwidth. This is particularly important when using broadband femtosecond light sources, where residual dispersion and phase distortions can significantly degrade temporal resolution and signal fidelity. We present our work employing a 4-f pulse-shaper in combination with MIIPS to optimize various broadband continuum sources for 2DES experiments.

        Speaker: Jacob Bromberg (Department of Physics, University of Ottawa)
      • 294
        Individual Qubit Manipulation Using Dual Acousto-Optic Deflectors in a Trapped Ion Quantum Processor

        In a quantum processor, quantum degrees of freedom such as the internal states of trapped atomic ions form a quantum bit of information, or qubit, and can be manipulated by laser beams. To improve the scalability of trapped-ion quantum computing, as well as to implement a broader range of algorithms, individual qubit state manipulation is a necessity. For high-fidelity quantum gates, some of the requirements are minimal relative intensity crosstalk on neighbouring ions, a fast switching time (microseconds or better), and high programmability of target addressing, especially when ion spacings are non-uniform. In this talk I will present our recent progress in realizing a large-scale (>30 ions) quantum processor with low-crosstalk individual qubit manipulation using a dual acousto-optic deflector (AOD) system. The coherent Raman addressing delivery is enabled by mapping frequencies of the AODs to positions along the ion chain, carefully calibrated by optimizing the AOD output in an intermediate image plane before relaying it to the ions. One of the challenges of this system is mapping both AODs to each ion position with the exact same frequency, so as not to introduce any unwanted detunings; this has been mitigated through careful optics simulation and iterative testing. We will also discuss the implications for the fidelity of quantum gates of tightly focused laser beams, comparable in size to the extent of the ion's motional spread in space, and a novel mitigation strategy. Our dual AOD system allows for fast, programmable single- and two-qubit quantum gates for a wide range of quantum algorithms -- both digital and analog.

        Speaker: Sakshee Patil
      • 295
        Voltage-Driven Electro-Spray and Optoelectronic Evaluation of Gadolinium-Doped Zirconium Sulphide Thin Films for Enhanced Optoelectronic Applications

        Samuel O. Shaka, Godfrey E. Akpojotor, Merrious O. Ofomola, Cletus Olisenekwu.

        Zirconium sulphide (ZrS) has long been considered an attractive material with fascinating physical, chemical, and optoelectronic properties. In this study, Gadolinium (Gd)-doped zirconium sulphide (ZrS/Gd) thin films were deposited on fluorine-doped tin oxide (FTO) substrates using an electro-spray deposition technique at deposition voltages of 10.5 V, 11.0 V, and 11.5 V, aimed at optimizing their structural, optical, and electrical properties for photonic and optoelectronic applications. Comprehensive characterization of the films included structural, optical, electrical, and morphological analyses. X-ray Diffraction (XRD) was used to determine crystal structure and crystallite size, while Scanning Electron Microscopy (SEM) and Energy-Dispersive X-ray (EDX) spectroscopy verified surface morphology and elemental composition. Optical properties were obtained using a UV-Visible spectrophotometer, and electrical conductivity was measured with a four-point probe system. UV–Vis spectroscopy showed enhanced absorbance in the UV region (300–400 nm), reaching 0.75 a.u. at 11.5 V, while transmittance peaked at ~85% for the 11.5 V sample in the visible range. The calculated optical bandgap increased from 3.21 eV at 10.5 V to 3.62 eV at 11.5 V, indicating voltage-dependent photon absorption. Optical conductivity reached a maximum of 0.73 S/m at 11.5 V, and the refractive index peaked at 3.0 around 3.4 eV photon energy. Electrical analysis showed enhanced conductivity, increasing from 1.65 S/m in undoped ZrS to 1.89 S/m at 11.5 V, while resistivity dropped from 0.61 Ω·m to 0.53 Ω·m. XRD analysis confirmed improved crystallinity and reduced dislocation density at higher voltages, with crystallite sizes ranging from 178 nm to 200 nm. SEM micrographs revealed uniform film deposition and nanoparticle agglomeration with no pinholes.
        These findings show that Gd doping and voltage-controlled deposition significantly enhance the optoelectronic properties of ZrS thin films, making them promising candidates for optoelectronic and photonic devices.

        Speaker: Dr Samuel Oghenemega Shaka (Delta State University, Abraka, Nigeria)
      • 296
        Nanoscale Stress and Magnetic Field Sensing with a Single NV Center in Bulk Diamond utilizing Inverse-Designed Photonic Structures

        We employ an isolated nitrogen–vacancy (NV) center as a quantum sensor to probe localized stress in a bulk diamond crystal and to measure magnetic fields, enabled by a monolithic inverse-designed photonic structure. This design allows efficient optical excitation of individual NV centers in a bulk sample and significantly enhances photon collection by engineering interference of the emitted light at the diamond–air interface. The resulting increase in collection efficiency improves the ODMR signal contrast and sensing sensitivity, while enabling reliable optical readout of single NV centers in bulk diamond. By integrating inverse-designed photonics with quantum-defect-based sensing in bulk diamond, our approach provides a scalable platform for high-sensitivity, nanoscale quantum sensing, with potential applications in geosciences, biomedicine, and advanced materials characterization.

        Speaker: Pratik Adhikary (University of Waterloo)
    • DAPI Poster Session & Student Poster Competition | Session d'affiches DPAI et concours d'affiches étudiantes
      • 297
        Characterization of Polarization Mode Dispersion in High-Speed Optical Fibers: Effects of Differential Group Delay and Environmental Factors in Abuja, Nigeria.

        The rising internet use, digital transformation, and expanding telecom networks have increased demand for high-speed data transmission. However, Polarization Mode Dispersion (PMD) remains a major challenge, degrading signal quality and limiting data rates in optical fiber systems. This study investigates the impact of PMD on high-speed single-mode optical fiber systems by analyzing Differential Group Delay (DGD) under varying environmental conditions in selected areas of Abuja, Nigeria. Using two modulation formats, Non-Return-to-Zero (NRZ) and Quadrature Amplitude Modulation (QAM), the study examines how PMD-induced DGD affects key signal performance metrics, including Bit Error Rate (BER), jitter, Signal-to-Noise Ratio (SNR), and output optical power across fiber lengths ranging from 1 km to 40 km. The experimental approach employed a time-domain method for direct DGD characterization, using a test-bed consisting of a PMD emulator, Bit Error Rate Tester (BERT), polarization controller, optical power meter, and environmental sensors for temperature and humidity monitoring. All measurements were conducted at data rates between 15 Gbps and 240 Gbps and at central wavelengths of 1350 nm and 1450 nm.
        Results indicate that in NRZ systems, DGD increases from 0.266 ps at 1 km to 4.543 ps at 40.56 km, with jitter rising from 4.12 ps to 31.20 ps and SNR varying between 34.2 dB and 52.9 dB depending on environmental conditions. In QAM systems, DGD rises from 0.325 ps to 4.54 ps, with jitter increasing from 3.85 ps to 33.20 ps. Humidity changes from 53.1% to 59.5% caused DGD increases of up to 2.94 ps (NRZ) and 1.45 ps (QAM), while temperature variations from 28.34°C to 29.90°C produced further fluctuations. BER values remained within 1.12×10⁻⁷ to 1.34×10⁻⁷, reflecting DGD’s influence on symbol detection accuracy. Overall, NRZ exhibits greater sensitivity to PMD-induced DGD under varying environmental conditions, while QAM shows slightly higher jitter at longer fiber lengths. These results highlight the need to account for both modulation format and environmental factors when designing resilient optical communication networks in Abuja and similar sub-Saharan regions.
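        For context (the standard long-fiber statistics, not a result of this study), mean DGD in the strong mode-coupling regime grows as $\langle\Delta\tau\rangle = D_{\mathrm{PMD}}\sqrt{L}$, with $D_{\mathrm{PMD}}$ the PMD coefficient in $\mathrm{ps}/\sqrt{\mathrm{km}}$, while in the short-length, weak-coupling regime DGD instead grows linearly with $L$.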

        Speakers: Dr Samuel Oghenemega Shaka (Delta State University), Ms precious Okoro (Federal University of Petroleum Resources, Effurun)
      • 298
        Competing Contributions to the Catalytic Activity of Barium Titanate

        The growing demand for environmentally friendly solutions to combat industrial pollutants has led to the exploration of nano-piezomaterials for water purification. This study reveals that piezoelectric effects contribute significantly, constituting around 90% of the catalytic response in 50 nm BaTiO3 compared to non-piezoelectric 10 nm BaTiO3.

        Speaker: Akram Asadi (Institut National de la Recherche Scientifique (INRS), Centre Énergie Matériaux Télécommunications (EMT), Québec, Canada)
      • 299
        Multi-Channel Multi-Template Event Reconstruction in SuperCDMS

        The Super Cryogenic Dark Matter Search (SuperCDMS) SNOLAB is a direct detection dark matter (DM) experiment located 2 $\mathrm{km}$ underground at the SNOLAB facility near Sudbury, Canada. The experiment consists of 24 cylindrical kilogram-scale Germanium and Silicon detectors with Transition Edge Sensor (TES) masks instrumented on the top and bottom surfaces. The unique mix of target substrates and detector technologies will allow SuperCDMS to probe the Weakly Interacting Massive Particle (WIMP) nuclear recoil DM parameter space with world-leading sensitivity in the mass range from 0.5 to 5 $\mathrm{GeV/c^2}$. SuperCDMS is currently in the commissioning phase and the first science run is expected to begin by mid-2026. From October 2023 to March 2024, the detectors in one of the four detector towers were characterized at the Cryogenic Underground TEst facility (CUTE), which is also located at SNOLAB. The DM sensitivity of the experiment depends on the position and energy resolution of the detectors. Therefore, correlated noise and pulse shape variations must be properly accounted for during the data processing. The SuperCDMS collaboration developed its own event reconstruction algorithm, $\mathrm{NxM}$, that simultaneously fits $\mathrm{N}$ detector channels with $\mathrm{M}$ pulse templates and is mathematically formulated to treat these effects. These improvements were tested on the data collected during the CUTE tower test. This poster will introduce the algorithm and show the results of its application to the data obtained at the CUTE facility.
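The core of a simultaneous multi-channel, multi-template fit can be sketched as follows. This is an illustrative least-squares version of the general idea only; the actual SuperCDMS $\mathrm{NxM}$ algorithm additionally weights by the measured noise covariance (typically in frequency space), which is omitted here, and all names and numbers below are synthetic.

```python
import numpy as np

# Illustrative multi-channel, multi-template fit: each channel trace is
# modeled as a linear combination of M pulse templates, and all N channels
# are fit simultaneously as one least-squares problem.  (The real NxM fit
# also folds in correlated noise via a covariance weighting, omitted here.)

def nxm_fit(traces, templates):
    """traces: (N, T) channel waveforms; templates: (M, T) pulse shapes.
    Returns the (N, M) amplitude matrix A minimizing |traces - A @ templates|."""
    A, *_ = np.linalg.lstsq(templates.T, traces.T, rcond=None)
    return A.T

# Synthetic check: N = 2 channels built from M = 2 exponential templates.
t = np.arange(200.0)
templates = np.vstack([np.exp(-t / 10.0), np.exp(-t / 30.0)])
A_true = np.array([[1.0, 0.5],
                   [0.2, 2.0]])
traces = A_true @ templates
A_fit = nxm_fit(traces, templates)
```

Fitting all channels against multiple templates at once is what lets pulse-shape variation (different templates) and channel-to-channel sharing of a single event be treated in one consistent estimate.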

        Speaker: Antoine Rehberg
      • 300
        Development of the GeV Cube high-energy electron detector

        The GeV Cube detector (GCD) is being designed as a new diagnostic for high-energy particle detection from short-pulse, high-power laser sources [1]. In particular, the GCD will be employed to study the ponderomotive acceleration of electrons from petawatt-class laser systems [2].

        The GCD is designed to detect and identify single electrons with energies in the range of 50 MeV to 1 GeV. It comprises three layers of 4×4 grids of optically isolated Bismuth Germanate (BGO) scintillator crystals, each affixed to a 4×4 Silicon Photomultiplier (SiPM) array [3]. Incoming electrons deposit energy in the scintillator crystals in the three layers, forming a track of the electron through the detector, with a plume of secondary scattered particles surrounding the path of the primary particle. Analysis of the energy deposited per layer along the primary particle track, together with the energy deposited per layer by the secondary particles in the surrounding pixels, allows clear identification that a single high-energy electron has been detected and an evaluation of its energy.

        Initial characterization of the GCD is carried out by detection of cosmic rays hitting the detector. Incident muons with average energies of 4 GeV [4] transit the detector similarly to high energy electrons. The gain on the SiPMs is reduced to give similar output signals per pixel as would be obtained with electrons. Geant4 simulations are employed to model the detector response to both electrons and muons. The details of the detector design and measured results will be presented.

        [1] M. C. Downer et al., “Diagnostics for plasma-based electron accelerators,” Rev. Mod. Phys., vol. 90, no. 3, Art. no. 035002, 2018, doi: 10.1103/RevModPhys.90.035002.
        [2] A. Longman et al., “Toward direct spatial and intensity characterization of ultra-high-intensity laser pulses using ponderomotive scattering of free electrons,” Phys. Plasmas, vol. 30, no. 8, 082110, 2023, doi: 10.1063/5.0160195.
        [3] Hamamatsu Photonics K.K., MPPC array S13361-3050AE-04, 2024.
        [4] J. L. Autran et al., “Characterization of atmospheric muons at sea level using a cosmic ray telescope,” Nucl. Instrum. Methods Phys. Res. A, vol. 903, pp. 77–84, 2018, doi: 10.1016/j.nima.2018.06.038.

        Speaker: Caleb Guthrie
      • 301
        Modeling of digitally photocorroding GaAs/Al0.35Ga0.65As nanoheterostructures

        Quantum Semiconductors and Photon-based BioNanotechnology Group, Laboratoire Nanotechnologies et Nanosystèmes (LN2) - CNRS IRL-3463, Interdisciplinary Institute for Technological Innovation (3IT), Department of Electrical and Computer Engineering, Université de Sherbrooke, 3000, boul. de l’Université, Sherbrooke, Québec J1K 0A5, Canada

        Digital photocorrosion (DIP) of a semiconductor is a two-step cyclic process consisting of an irradiation phase followed by a dark phase.[1] DIP applied to GaAs/Al$_{0.35}$Ga$_{0.65}$As heterostructures has been investigated for the detection of electrically charged biomolecules, such as bacteria and spores immobilized on the surface of such devices. Promising biosensing performances have been reported, particularly for DIP on a single pair of GaAs/AlGaAs nanolayers.[2]
        Although DIP processed GaAs/AlGaAs surfaces offer favorable conditions for the formation of high quality alkanethiol self-assembled monolayers [3], extending DIP to multiple pairs of GaAs/AlGaAs nanolayers for repeated biosensing has proven challenging. Progressive accumulation of Ga- and Al-byproducts on the surface of biochips processed in phosphate buffered saline, and the relatively complicated structure of photoluminescence (PL) intensity maxima (PL$_{\text{MAX}}$) revealed during photocorrosion of GaAs/AlGaAs nanolayers were identified as the potential key factors limiting the successful implementation of this technology for quasi-continuous biosensing.
        To shed light on the dependence of the PL emission intensity on the absorption depth of PL-exciting photons, we carried out numerical simulations of the DIP process for GaAs nanolayers separated by AlGaAs nanolayers of different thicknesses. The modeling indicated that the complex shape of PL$_{\text{MAX}}$ for a structure consisting of a GaAs (6 nm)/AlGaAs (10 nm) multilayer stack originates from weak PL emission by deeply located GaAs nanolayers in the microstructure. This has been verified by DIP experiments involving GaAs/AlGaAs nanoheterostructures with the thickness of the AlGaAs layer increased to 20 nm. The results suggest the possibility of designing nanoheterostructures with stacks of GaAs/AlGaAs nanolayers exhibiting PL emission exclusively from an individual GaAs nanolayer during the DIP process. In addition to the increased thickness of the PL-screening AlGaAs nanolayer, this effect can be tuned by selective doping of the investigated nanoheterostructures. Nanoscale resolution of the DIP process of GaAs/AlGaAs nanoheterostructures is of potential interest for the fabrication of advanced devices at attractive cost.


        [1] M.R. Aziziyan et al., ACS Applied Materials & Interfaces, 11 (2019) 17968-17978.
        [2] E. Nazemi et al., Biosensors and Bioelectronics, 93 (2017) 234-240.
        [3] R. St-Onge et al., Applied Physics Letters, 118 (2021) 222102.

        Speaker: René St-Onge
      • 302
        Real-time Machine Learning for the ARGO Data Acquisition System

        Physicists continue to invest significant effort in the search for dark matter using increasingly large and sensitive detectors. ARGO is a next-generation liquid argon (LAr) experiment designed to achieve enhanced sensitivity through advanced photodetection and large-scale instrumentation. The detector design under study employs Single Photon Avalanche Diodes (SPADs) with digital readout over a total instrumented surface of approximately 200 m², requiring the simultaneous handling of millions of data channels. This scale presents significant challenges for data acquisition systems in terms of power consumption, cabling complexity, and data storage, motivating the use of real-time data processing and reduction near the detector.

        In this work, we investigate the use of real-time machine-learning (ML) techniques as part of the ARGO data acquisition chain. One convolutional neural network model (CNN) classifies particle interactions, while another reconstructs event position. Performance is evaluated using particle identification accuracy and position reconstruction error distributions. Ongoing work explores integrating both tasks into a unified CNN model to improve performance and reduce edge computing requirements.

        Speaker: Sajedeh Esmaeilzadeh
      • 303
        Optimising a non-invasive viscosity measurement with portable NMR

        There is an acute need for non-invasive measurements of viscosity in bioactive solutions (formulations). Transition times from a stationary sample to solid-body rotation carry information on viscosity. We demonstrate a portable NMR measurement that detects these transition times as a proof of principle, with further improvements in the works.

        Speaker: Vincent Belzile (University of New Brunswick)
      • 304
        DarkSide-20k Inner Detector Design and Construction

        Weakly Interacting Massive Particles (WIMPs) are among the most compelling candidates for particle dark matter, and their direct detection remains one of the central goals of astroparticle physics. Dual-phase liquid argon time projection chambers have emerged as one of the leading technologies for probing light galactic dark matter, particularly for masses below $10~\mathrm{GeV}/c^{2}$, as demonstrated by the DarkSide-50 experiment using 50~kg of underground liquid argon. Motivated by the excellent background rejection and sensitivity achieved by DS-50, scaling this technology to multi-tonne target masses is essential to reach background-free operation and extend direct detection searches toward the neutrino floor.

        DarkSide-20k is a next-generation cryogenic dark matter experiment hosted at the Laboratori Nazionali del Gran Sasso, designed to achieve instrumental background-free sensitivity in the search for WIMP interactions. The experiment employs a 50-tonne dual-phase liquid argon time projection chamber as its inner detector, using underground argon as the active target to suppress intrinsic radioactive backgrounds. The inner detector is based on an ultra-pure octagonal PMMA vessel, with a field cage implemented using a commercial conductive polymer coating (Clevios) with better than $100~\mu$m uniformity, and inner surfaces lined with TPB-coated enhanced specular reflector foils to efficiently shift argon scintillation light to 420~nm for optical detection. Dual top/bottom planes house $\sim$200k cryogenic FBK NUV-HD SiPMs organized in 528 photodetector modules (PDMs) with $>40\%$ photon detection efficiency.

        This presentation focuses on the design of the DarkSide-20k inner detector, with particular emphasis on the mechanical implementation of the PMMA vessel, field cage integration, and optical system layout. The architecture of the silicon photomultiplier-based photon detection system will be discussed, including the organization of SiPM arrays into photodetector modules and their role in achieving high photon detection efficiency and improved pulse-shape discrimination. Key design considerations relevant to radiopurity and detector performance will be highlighted.

        Speaker: Mr Shashank Mishra (Queen's University)
      • 305
        Measurement of sidewall roughness using AFM

        With the ever-larger number of ever-smaller devices, the morphological perfection of device surfaces and the related characterization techniques have become increasingly important for micro-electromechanical systems, nanoelectronics, opto-electronics and, even more critically, photonic devices, because of the significant effect surface topography has on device performance. A well-established technique for three-dimensional surface characterization is atomic force microscopy (AFM), capable of high resolution and high accuracy. However, the typical AFM measurement geometry is designed to characterize surfaces parallel to the main stage surface (generally close to perpendicular to the AFM tip axis).
        Here we report on the adaptation of a commercial AFM system for sidewall imaging and characterization, that is, for scanning, imaging and characterizing surfaces whose orientation is far from the horizontal plane, ultimately surfaces perpendicular to the main stage surface. We present sidewall roughness measurements obtained on several model samples, in particular on optical waveguide structures used in photonic devices, for which surface roughness is crucial.

        Speaker: Catalin Harnagea
      • 306
        SiPM Characterization for Improved External Cross-Talk Modeling in DarkSide-20k

        Silicon photomultipliers (SiPMs) are increasingly adopted as the photosensors of choice in next-generation dark matter experiments due to their high photodetection efficiency and low operating voltage. In large detector arrays, however, SiPM performance can be significantly degraded by correlated noise processes, particularly external cross-talk, in which photons emitted from the triggering of one sensor induce spurious signals in neighboring devices. Understanding and mitigating this effect is critical for achieving the sensitivity required for rare-event dark matter searches such as DarkSide-20k.

        I present a comprehensive experimental characterization of DarkSide-20k SiPMs performed under experiment-like cryogenic and vacuum conditions. Measurements were carried out using the Vacuum Emission Reflection and Absorption (VERA) setup at TRIUMF to determine key parameters involved in quantifying correlated noise, including dark count rate (DCR), breakdown voltage, and angular photodetection efficiency.

        A particular focus of this work is the measurement of the angular dependence of SiPM photodetection efficiency, which directly influences the probability and spatial distribution of external cross-talk in densely packed SiPM arrays. The measured angular response provides new data that will be used in Monte Carlo simulations of correlated noise in DarkSide-20k. These results enable more realistic modeling of external cross-talk and allow quantitative assessment of its impact on detector performance and overall experimental sensitivity as a function of detector geometry and operating parameters.

        The characterization data presented here represent an important step toward improving the accuracy of SiPM noise modeling in large noble-liquid detectors. Incorporating these measurements into simulation frameworks will aid in the optimization of detector design and experimental geometry, contributing to enhanced sensitivity in current and future SiPM-based dark matter searches.

        Speaker: Lucas Backes (TRIUMF)
      • 307
        Retrofittable Absolute Positioning Motorized Optics Mounts

        Experimental apparatuses in atomic, molecular, and optical (AMO) physics require precise control of the free-space alignment of optical components. Such control is often achieved by actuating micrometer screws on various optical mounts. Fluctuations in environmental factors, such as temperature and humidity, necessitate re-optimization of the alignment.

        Motorized kinematic mounts enable control over the alignment of the optical setup by actuating the micrometer screws with electromechanical devices. While these devices are commercially available, they can be cost-prohibitive, bulky, and difficult to retrofit into an existing experiment. Previous instances of home-built motorized screw actuators were designed with open-loop control, reducing their reliability for long-term deployment.

        In this work, we demonstrate that a 3-D printed modular attachment can be integrated with a Thorlabs KM100 mirror mount to enable angular control. We use a rotary encoder attached to the mirror mount actuator screw and mechanically decoupled from any moving elements to demonstrate absolute tip and tilt positioning of the mirror with high resolution and repeatability. The attachment we develop is compact and can be added to an existing optical setup without significantly degrading its alignment. The electronics system necessary for controlling the motorized attachment is based on off-the-shelf components designed for 3-D printer hardware control.

        Speaker: Andrew Lehmann (University of Toronto)
    • DASP Poster Session | Session d'affiches DPAE
      • 308
        Search for ultra-high energy cosmic rays with the fluorescence telescope onboard the ISS

        The air fluorescence detection technique is widely used in ultra-high energy cosmic ray experiments to reconstruct the shower cascade profile and determine both the energy and arrival direction of the primary particle. Using the CORSIKA air shower simulation code, the evolution of extensive air showers induced by primary particles with energies reaching up to 100 EeV is modeled, considering various combinations of high- and low-energy hadronic interaction models. The study examines the sensitivity of key parameters, including the longitudinal particle distribution, the depth of the shower maximum, and the energy deposited in the atmosphere, under different simulation conditions. The implications of these findings for UHECR experiments are explored, with a particular focus on a space-based fluorescence telescope aboard the International Space Station. Additionally, the number of fluorescence photons and the timing profile of their arrival at the detector pupil are computed for typical extreme-energy events.

        Speaker: Prof. Taoufik Djemil (Université Badji Mokhtar)
      • 309
        The Metrology of Meteorology: how measurement science enables climate science and action

        When I speak of Metrology to people, they often think I’m talking about Meteorology …until they get very confused. That’s because Metrology – the science of measurement – despite playing a fundamental role in our everyday lives, is mostly working behind the scenes. Metrology is best known in industrial contexts, where interoperability and matching parts is paramount to doing business. Metrology is based on two main concepts to ensure measurement comparability and reliability: traceability and uncertainty.

        Historically, Atmospheric Science, Oceanography and Ecology, to name only a few, have not spent much time thinking about Metrology. After all, much has been achieved without it! Nowadays, however, climate models are growing more detailed, and climate action requires reliable monitoring. With carbon emissions and capture becoming transactional (think carbon credits), measurement standards with known uncertainty and traceability to the International System of Units are increasingly important.

        In 2023, the Bureau International des Poids et Mesures (BIPM) and the World Meteorological Organization (WMO) released a report entitled Metrology for Climate Action, kickstarting a joint effort to improve the traceability and uncertainty of all climate system measurements. These improvements will support the physical science basis of climate science by increasing model accuracy and by facilitating collaboration around the world through better measurement comparability, stability, and continuity, even across different measurement methods. They will also strengthen climate action by increasing confidence in carbon credit valuations, improving policy monitoring, and reducing greenwashing.

        But applying metrology principles to complex climate system measurements is no easy task. It requires metrology experts and climate experts to talk to one another until they truly understand each other, to think outside the box, and to bring a great deal of creativity. In this presentation, some of these complex questions and creative solutions will be explored, highlighting the value of bringing metrology and meteorology together.

        Speaker: Stéphanie Gagné (National Research Council Canada)
      • 310
        Another look at the equation of time

        Historically, the equation of time ET, giving the difference between mean solar time and true solar time, was useful for improving estimates of longitude. Today, with accurate GPS positioning systems and accurate atomic clocks, the equation of time is seldom of practical use. It is nonetheless of interest from a basic physics standpoint because of its connection with the Earth's orbital properties responsible for the difference between mean solar time and actual solar time. This presentation will explain the equation of time first using a phenomenological approach, and then more rigorously, mathematically.
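        The phenomenological picture can be made concrete with the standard textbook approximation for ET (not taken from the talk itself; the coefficients and day-numbering convention below are the usual approximate values, accurate to roughly half a minute):

```python
import math

def equation_of_time_minutes(day_of_year):
    """Common approximation to the equation of time ET, in minutes.

    The periodic terms reflect the two orbital causes of ET: Earth's
    orbital eccentricity and the obliquity of the ecliptic. Positive ET
    means the true Sun is ahead of the mean Sun.
    """
    b = 2.0 * math.pi * (day_of_year - 81) / 365.0
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)
```

For example, ET is near zero in mid-April, reaches its largest positive value (about +16 minutes) in early November, and its most negative value (about -14 minutes) in mid-February.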

        Speaker: Prof. Richard Marchand
    • DCMMP Poster Session & Student Poster Competition | Session d'affiches DPMCM et concours d'affiches étudiantes
    • DNP Poster Session & Student Poster Competition | Session d'affiches DPN et concours d'affiches étudiantes
    • DPE Poster Session & Student Poster Competition | Session d'affiches DEP et concours d'affiches étudiantes
      • 311
        How to make your research group more inclusive for autistic trainees

        One of the most important responsibilities of university faculty is mentoring the junior members of our research groups and creating an inclusive environment in which they can thrive. Since my autism diagnosis in 2022, colleagues have asked me how they can make their research groups more welcoming to autistic trainees. This short guide, based on conversations with autistic students and academics, intense reflection on my own lived experience, and a deep dive into the literature, provides five concrete steps toward this goal.

        Speaker: Heather Logan (Carleton University)
    • DPMB Poster Session & Student Poster Competition | Session d'affiches DPMB et concours d'affiches étudiantes
      • 312
        Probing the Translocation Dynamics of Linear Polysaccharide Polymers: Solid-State Nanopore Sensing of Heparin

        Elucidating the complex roles of carbohydrates in health and disease requires the development of novel biophysical methods; however, single-molecule analysis of these polymers remains a formidable challenge due to their structural heterogeneity and rapid dynamics. While solid-state nanopores offer a platform for label-free single-molecule sensing, the characterization of short polysaccharide chains—such as the widely used anticoagulant heparin—is severely limited by their high-speed translocation. In this work, we investigate the capture kinetics and translocation dynamics of heparin through bare silicon nitride nanopores. We examine the pore-molecule interactions as a function of pH to map the conformational dynamics of the polyelectrolytes during driven translocation. We discuss the underlying physical forces driving these rapid events and propose practical nanoscale modifications to reduce translocation speed, paving the way for high-fidelity carbohydrate analysis.

        Speaker: Kexin Zhao (Department of Physics, University of Ottawa)
      • 313
        Modeling Temperature- and Salt-Dependent Fold Switching in the Metamorphic Protein Lymphotactin

        Proteins are often described by a single-funnel free-energy landscape leading to one native structure, yet metamorphic proteins reversibly interconvert between distinct folded states under physiological conditions. Lymphotactin (XCL1) is a striking example, exhibiting a chemokine-like monomeric fold and an alternative all-β dimeric fold whose equilibrium populations shift with temperature and salt concentration, implying an interplay between two native folds. Here, we developed a coarse-grained, dual-basin structure-based model to dissect how temperature, salt, and dimerization jointly control this fold switch. By tuning the relative strengths of monomer- and dimer-associated native contacts, we reproduce the qualitative experimental trend that the chemokine fold dominates at lower temperatures while the alternative dimer becomes increasingly populated near physiological temperatures, followed by unfolding at higher temperatures. To capture salt effects, we introduce an effective salt parameter that selectively rescales native contacts between charged residues, strengthening unlike-charge contacts and weakening like-charge contacts as screening is reduced; this produces a pronounced population shift toward the alternative fold and yields a temperature–salt phase diagram with a broad coexistence region where both folds are significantly populated. Analysis of charged-contact networks shows that the fold switch is characterized by a major redistribution of electrostatic contacts, including the formation of stabilizing unlike-charge interactions across the dimer interface. Finally, we extend the same framework to point mutations and reconstructed ancestral XCL1 variants, which have experimentally been shown to have different fold population propensities. We identify when charge-charge interactions are sufficient to explain fold-population trends and where additional sequence-dependent effects beyond native-charge contacts may be required.
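        The salt-dependent rescaling of charged native contacts described above can be sketched as follows; the linear functional form and the parameter names are illustrative assumptions for exposition, not the authors' actual parameterization:

```python
import numpy as np

def rescale_charged_contacts(eps, q_i, q_j, s):
    """Rescale native-contact strengths between charged residue pairs.

    eps      : baseline contact strengths (one entry per native contact)
    q_i, q_j : signed charges of the two residues in each contact
    s        : effective salt parameter in [0, 1); larger s mimics reduced
               screening, strengthening unlike-charge contacts and
               weakening like-charge contacts. Neutral pairs (q_i*q_j == 0)
               are left untouched.
    """
    sign = np.sign(q_i * q_j)       # +1 like-charge, -1 unlike-charge, 0 neutral
    return eps * (1.0 - s * sign)   # unlike: *(1+s); like: *(1-s)
```

Sweeping s then shifts the balance between the two fold basins, which is how a temperature-salt phase diagram of the kind described above could be mapped out.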

        Speaker: Dr Bahman Seifi (Department of Physics and Physical Oceanography, Memorial University of Newfoundland and Labrador)
      • 314
        Exploring Individualized Stimuli for Auditory Evoked Responses

        Objective physiological tests are needed to assess the hearing abilities of children who are not developmentally mature enough for behavioural hearing tests. One such physiological test to assess hearing sensitivity is the envelope following response (EFR). EFRs are scalp-based recordings of neural activity at the brainstem level that follows the envelope of sounds, such as naturally spoken vowels. Although EFRs have shown promise for inferring hearing acuity, low-amplitude responses can be obscured by the electrophysiologic noise floor, reducing measurement accuracy and certainty in clinical decision-making. Past studies in our lab have shown that EFR amplitude is influenced by phase differences between pairs of vowel harmonics. Phase differences can be manipulated to increase EFR amplitudes based on a cochlear model of an average ear; however, the success of this approach is highly variable across participants.

        To improve the optimization accuracy per individual, the present study aims to assess the utility of otoacoustic emissions (OAEs) to personalise the adjustment of harmonics phases. OAEs are low level sounds emitted by the inner ear (cochlea) in response to acoustic stimuli and they faithfully reflect cochlear mechanical processing. OAEs could thus aid in the estimation of stimulus travel time and individualized phase accumulation. By measuring each participant’s OAEs at the same frequency regions as the EFR stimulus, we will be able to design individualized EFR stimuli with optimal harmonic phase relationships for each vowel. If successful, this would be a first step towards a more time-efficient and individualized hearing test using vowel sounds. In addition, this work will also improve our understanding of how EFRs are initiated in the cochlea.

        Speaker: Elizabeth Allison (Western University)
      • 315
        In Silico Estimation of Brain Optical Properties with Time-Resolved Near-Infrared Spectroscopy Using a Two-Detector Two-Layer Photon Diffusion Model

        Time-Resolved Near-Infrared Spectroscopy (tr-NIRS) is a non-invasive optical technique that shows potential for bedside neuromonitoring of patients with or at risk of brain injuries. Current tr-NIRS analysis methods typically assume the head is optically homogeneous; however, such approaches are too sensitive to changes in the scalp, leading to inaccurate estimates of brain optical properties. Two-layer photon diffusion model inversion algorithms (TLPMI) are a class of algorithms that model the adult head as a two-layer medium to reduce the influence of the scalp on the estimation of cerebral optical properties. Implementations of TLPMI may include two detectors: a short-distance detector, used to estimate only the optical properties of the top-layer (scalp, skull, cerebrospinal fluid), and a long-distance detector, used to recover properties of the bottom layer (brain). In this study we will use two-detector TLPMI to estimate absolute optical properties in the brain, with a focus on the absorption coefficient. Using digital phantoms generated with both analytical and Monte Carlo methods, we will show how the brain layer absorption coefficient estimated with TLPMI is biased by detector configuration, mischaracterization of top-layer optical properties, fitting parameter tolerance, and medium heterogeneity. Hyperspectral data analysis will also be discussed. The consequences of the errors in TLPMI will be reviewed in reference to the recovery of clinically significant chromophores. This study will aid prospective researchers in effectively applying TLPMI and analyzing its results.

        Speaker: Aria Riahi (University of Western Ontario)
      • 316
        Deep Learning-Based Target Segmentation for MRI-Guided HDR-Brachytherapy: A Pilot Study

        Cervical cancer remains a major global health burden, with Image-guided High-Dose-Rate Brachytherapy (HDR-BT) serving as a critical treatment modality, delivering precise radiation to the tumor while sparing organs at risk. While MRI is the gold standard for defining the Gross Tumor Volume (GTV), High-Risk Clinical Target Volume (HR-CTV), and Intermediate-Risk Clinical Target Volume (IR-CTV), manual segmentation is time-intensive and prone to inter-observer variability. Despite the clinical shift toward MRI guidance, automated segmentation tools remain predominantly CT-based. This pilot study evaluates the feasibility of a Convolutional Neural Network (CNN) framework for MRI-based segmentation and outlines a roadmap for integrating orthogonal views and Large Language Models (LLMs) to enhance workflow efficiency.

        A pilot cohort of 12 patients with gynecological malignancies was selected. Ground truth contours were manually delineated by radiation oncologists. An automated pipeline using a self-configuring 3D U-Net architecture was developed. To address the nested nature of the target volumes, a hierarchical labeling strategy was implemented in which regions were trained as exclusive shells and reconstructed into cumulative clinical volumes during inference.

        The model achieved promising concordance for larger volumes, with a mean Dice Similarity Coefficient (DSC) of 0.69 ± 0.08 for HR-CTV and 0.70 ± 0.08 for IR-CTV. GTV performance was lower (DSC 0.16 ± 0.15), attributed to data saturation where the model correctly identified high-risk regions but conservatively classified the GTV core as HR-CTV due to limited training variance. Surface distance metrics were robust, yielding a mean HD95 of 9.4 ± 4.2 mm for HR-CTV and 11.2 ± 5.0 mm for IR-CTV.

        This pilot study confirms the pipeline's stability ahead of an expansion to a ~300-patient dataset, which is expected to resolve GTV generalization constraints. Next steps involve extending the pipeline to a multi-view framework, integrating contours from sagittal and coronal planes to augment axial predictions. A novel multi-modal integration phase utilizing LLMs will also be deployed to encode clinical notes and diagnostic reports. By embedding this contextual data alongside imaging data, our aim is to refine predicted contours, specifically the IR-CTV boundary, which relies heavily on pre-treatment diagnostic context.

        Speaker: Soroush Ghomashchi (University of Toronto)
      • 317
        Synthesizing higher-resolution transmission images from rastered pencil beams in a simultaneous-primary-scatter x-ray imager (SPSxi)

        X-ray imaging in medicine, industry, and security can be limited by low contrast between materials. Our lab is developing a Simultaneous-Primary-Scatter x-ray imager (SPSxi), akin at a basic level to combining bright field and dark field imaging in microscopy. Radiation from a rotating-anode x-ray tube is collimated into pencil beams using tungsten/copper plates and the object is step-and-shoot raster scanned through the beams. The resulting radiation distributions are recorded by a PerkinElmer XRD 0840 AN13 Gd2O2S flat panel sensor, analysed, and a transmitted image plus a stack of scatter images at different angles out to about 10° are built point-by-point. The spatial resolution of the scatter image is determined by the pencil beam diameter, and thus there is a tradeoff between spatial resolution and scatter signal size. In our original SPSxi design, the pixel sizes of the scatter and primary images were the same, and equal to the distance that the object was stepped between acquisitions, in the neighbourhood of 1 mm.

        Here we report on increasing the spatial resolution of the primary image, which is set by different parameters than the scatter image. In our original design, the primary signal was averaged across the pencil beam. In our current design, the intra-beam information is preserved. At the imaging detector, the primary beam exceeds 4 mm in diameter, while the detector element size is 400 μm. Thus a small circular pattern of about 80 samples, each 400 μm square, can be recorded for each x-ray exposure. As the object is stepped through the beam, most object locations are sampled multiple times, by different parts of the primary beam, because the sequential primary beam footprints overlap on the object. We use Python code to position and place the primary beam information into a higher-resolution accumulation matrix. Since the SPSxi acquisition step size is not generally an integer multiple of 400 μm, we use an accumulation matrix with a finer grid, such as 100 μm. To generate the transmission image, the accumulated data are normalized by data from an air scan with no object present. Recent results are shown.
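        The accumulation-and-normalization step might be sketched as follows (the function names, the (x, y, value) sample format, and the parallel count matrix are our illustrative assumptions, not the lab's actual Python code):

```python
import numpy as np

def accumulate(samples, grid_shape, fine_pitch_um=100.0):
    """Place primary-beam samples into a fine accumulation grid.

    samples: iterable of (x_um, y_um, value) giving the physical position
    of each detector-element sample for each exposure. Overlapping beam
    footprints from successive steps are summed; a parallel count matrix
    records how many samples landed in each fine cell.
    """
    acc = np.zeros(grid_shape)
    cnt = np.zeros(grid_shape)
    for x, y, v in samples:
        i, j = int(x // fine_pitch_um), int(y // fine_pitch_um)
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
            acc[i, j] += v
            cnt[i, j] += 1
    return acc, cnt

def transmission(acc_obj, cnt_obj, acc_air, cnt_air):
    """Normalize the object accumulation by an air-scan accumulation."""
    obj = np.where(cnt_obj > 0, acc_obj / np.maximum(cnt_obj, 1), np.nan)
    air = np.where(cnt_air > 0, acc_air / np.maximum(cnt_air, 1), np.nan)
    return obj / air   # NaN marks fine cells never sampled
```

Cells sampled several times by different parts of the beam are simply averaged before normalization, which is what yields the resolution gain over the original one-value-per-step design.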

        Speaker: Gabrielle Z. Lachance (Dept. Physics, Carleton Univ., Ottawa, Canada)
      • 318
        Evaluating an approximation of biological tissue photon propagation model using simulations

        Modelling photon propagation in biological tissue is crucial for developing effective in vivo optical spectroscopic methods, such as those used to quantify blood oxygenation and the concentration of cytochrome c oxidase in its various redox states. However, current analytical models of light propagation in tissue are complex, making their implementation and usage quite challenging. Recently, an approximation of the photon flux model was proposed: $\log R = \log\left(1 + \rho\sqrt{3\mu_a\mu_s'}\right) - \log\mu_s' + C$, where $R$ is the measured reflectance, $\rho$ is the source-detector separation, $\mu_a$ and $\mu_s'$ are the absorption and reduced scattering coefficients, and $C$ is a constant that can be removed by taking the derivative of the equation. The aim of this project was to test the validity of this simplified model against the standard equation. The differences between the models are quantified, and both are fitted to in silico data to evaluate their ability to extract relevant physiological parameters such as the concentrations of oxygenated and deoxygenated hemoglobin and cytochrome c oxidase, and the water content. Fitting was performed over a wavelength range of 700 nm to 900 nm, to mimic the typical spectral range used for in vivo tissue spectroscopy. While the approximation is simpler, and therefore more convenient, it fails to properly reproduce the original equation. Notably, the results showed that using the simplified equation could lead to errors of up to 99% for parameters such as cytochrome c oxidase and oxyhemoglobin, 25% for water and 21% for deoxyhemoglobin concentrations. These findings suggest that this approximation is inaccurate and should not be used to estimate the aforementioned tissue parameters.
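        For concreteness, the proposed approximation is a one-line model function (a direct transcription of the formula quoted above; the helper name and the choice of mm units are our own):

```python
import numpy as np

def log_reflectance_approx(rho, mua, musp, C=0.0):
    """Approximate log-reflectance model quoted in the abstract:

        log R = log(1 + rho * sqrt(3 * mua * musp)) - log(musp) + C

    rho in mm, mua and musp in 1/mm. The additive constant C drops out of
    any derivative (e.g. with respect to wavelength), which is how it is
    removed in practice.
    """
    return np.log1p(rho * np.sqrt(3.0 * mua * musp)) - np.log(musp) + C
```

Because C is purely additive, differencing the model across wavelengths eliminates it, so fits can be performed on the spectral derivative of log R.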

        Speaker: Darya Sukhina
      • 319
        Preclinical Evaluation of Assigning Anatomical Labels in Magnetic Resonance Imaging Using a Transgenic 5XFAD Mouse Model of Alzheimer’s Disease

        Alzheimer’s disease (AD) is the most common form of progressive neurodegenerative dementia and a leading cause of death worldwide. The definitive cause of AD remains unknown, but its development has a multifaceted etiology. Early AD diagnosis is crucial, as pathology begins decades before symptoms appear, and the diagnosis can only be confirmed post-mortem. Imaging techniques such as magnetic resonance imaging (MRI) provide insight into the distinct patterns of brain inflammation followed by degeneration. In both clinical and preclinical imaging, MRI enables non-invasive, longitudinal, volume-based quantification of regions of interest with sensitivity to multiple tissue properties. Despite advances in preclinical imaging, there is limited consensus on standardized, quantitative methods for assigning anatomical labels to brain regions in mouse models of cognitive decline. From a medical physics perspective, reliable anatomical labeling is essential for quantitative MRI, yet automated pipelines remain under-validated in preclinical disease models.
        This project addresses this challenge using the most aggressive disease model, 5XFAD transgenic mice, to detect early differences in brain anatomy. The primary objective is to determine whether MRI can be used as an early diagnostic tool by detecting age-dependent structural changes across the lifespan using a 1 T M2™ Compact High-Performance MRI System (Aspect Imaging) and a 3D imaging method. A major challenge in pre-clinical studies is the lack of accessibility and interrater variability in quantifying neurodegeneration. This project explores diverse MRI processing strategies, including manual segmentation using the Allen Mouse Brain Atlas and the computational pipeline AIDAmri. By exploring a variety of techniques, the anticipated outcome is to evaluate the robustness through cross-method agreement, age-dependent consistency, and sensitivity to preprocessing choices in the 5XFAD model, not previously characterized by the AIDAmri pipeline. This research advances techniques for automated anatomical labeling in preclinical MRI and supports the development of standardized imaging pipelines applicable to longitudinal therapeutic investigations of neurodegeneration.

        Speaker: Alex Stoinescu (University of Windsor)
      • 320
        Modelling the motility patterns of peritrichous bacteria in the presence of varying chemoattractant and chemorepellent gradients

        The motility of peritrichous bacteria, such as Escherichia coli (E. coli), is largely governed by “run-and-tumble” movements. E. coli have approximately 6 flagella which they use to propel themselves through viscous media. When these flagella rotate in a counterclockwise manner, the E. coli “run”, and are pushed in a steady direction forward. When the flagella rotate in a clockwise manner, the E. coli “tumble”, leading to a largely erratic motion. These types of movements alternate, resulting in the E. coli performing a random walk. However, this random walk becomes biased in the presence of chemoattractants and chemorepellents due to chemotaxis. During chemotaxis, E. coli will perform a varying number of tumbles and change the duration of runs to follow a gradient to areas of more chemoattractant and less chemorepellent. By simulating the motion of E. coli using a biased run-and-tumble model in Python, patterns in the duration and frequency of runs and tumbles, as well as bacteria cluster sizing, will be measured to elucidate underlying patterns in how E. coli move through different chemoattractant and chemorepellent gradients.
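        A minimal 2-D version of such a biased run-and-tumble walk might look like the following; the exponential tumble-rate bias and all parameter values are simplifying assumptions for illustration, not the study's actual model (which would also include receptor adaptation):

```python
import numpy as np

rng = np.random.default_rng(0)

def run_and_tumble(conc, pos0, n_steps=1000, dt=0.1, v=20.0,
                   base_rate=1.0, sensitivity=0.5):
    """Biased run-and-tumble walk in 2-D.

    conc : callable returning the chemoattractant concentration at a
    position. The tumble rate is lowered when the cell moves up the
    gradient, lengthening runs toward attractant and biasing the
    otherwise random walk.
    """
    pos = np.array(pos0, dtype=float)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    traj = [pos.copy()]
    c_prev = conc(pos)
    for _ in range(n_steps):
        # run: move in the current heading at constant speed
        pos = pos + v * dt * np.array([np.cos(theta), np.sin(theta)])
        c_now = conc(pos)
        # bias: fewer tumbles when the concentration is increasing
        rate = base_rate * np.exp(-sensitivity * (c_now - c_prev))
        if rng.random() < 1.0 - np.exp(-rate * dt):
            theta = rng.uniform(0.0, 2.0 * np.pi)  # tumble: new random heading
        c_prev = c_now
        traj.append(pos.copy())
    return np.array(traj)
```

Run/tumble durations and cluster statistics of the kind described above can then be read off the returned trajectories.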

        Speaker: Justus McRae (Brock University)
      • 321
        Spectroscopic Phasor Cross Correlation: A Novel Framework for Dynamic OCT Contrasts

        Motivation/Background: Optical Coherence Tomography (OCT) signals provide highly sensitive measures of reflective elements in target tissue. The speckle of an OCT signal can capture cellular dynamics using a set of techniques referred to as dynamic OCT, which produces contrasts corresponding to molecular movements. OCT time signals also contain spectral information, which has been used in pulse oximetry (1), intravascular lipid mapping (2), and chromophore identification (3). However, this spectral information has not yet been applied to dynamic OCT methods that track changes in molecular movements over time.
        Methods: In this study, a potential methodology for applying spectroscopic information for the development of dynamic OCT contrasts was illustrated. After computing the short-time Fourier transform, spectral contributions of OCT speckle were detected by performing cross correlation of the total complex phasor in each voxel with complex phasors in each spectral bin.
        Results: Various spectral bands have shown activity in different layers of the cornea, indicating different types of cellular activity. By correlating the evolution of component phasors with the total phasor at an OCT voxel, a series of images of a corneal B-scan over time was created, showing the spectral basis of fluctuations in the OCT signal.
        Discussion/Conclusion: This method has the potential to create a new set of dynamic OCT contrasts that may reveal unique characteristics of scatterers such as particle size and sub-resolution morphology. With this new dimension of OCT signal analysis, the biological mechanics of OCT-biopsied tissues may be described in finer detail.
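        The cross-correlation step in the Methods can be illustrated on a synthetic single-voxel time trace. The window length, hop size, and test signal below are illustrative choices, not the study's processing parameters:

```python
import numpy as np

def spectral_phasor_correlation(signal, win=64, hop=16):
    """Zero-lag normalized cross-correlation of each spectral bin's
    complex phasor with the total phasor across short-time frames.
    Illustrative sketch of the idea, not the study's implementation."""
    n = (len(signal) - win) // hop + 1
    frames = np.stack([signal[i * hop : i * hop + win] for i in range(n)])
    spec = np.fft.rfft(frames * np.hanning(win), axis=1)  # (frames, bins)
    total = spec.sum(axis=1)                              # total phasor per frame
    num = np.abs((spec.conj() * total[:, None]).sum(axis=0))
    den = np.linalg.norm(spec, axis=0) * np.linalg.norm(total) + 1e-12
    return num / den  # one correlation value per spectral bin

# Synthetic voxel time trace: one oscillatory component plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
x = np.sin(2 * np.pi * 40 * t) + 0.1 * rng.normal(size=t.size)
corr = spectral_phasor_correlation(x)
print(corr[2], corr[10])  # bin near the oscillation vs a noise-only bin
```

        Bins carrying the coherent oscillation track the total phasor closely (correlation near 1), while noise-only bins do not, which is the contrast mechanism described above.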

        Speaker: Vethushan Ramalingam (University of Waterloo)
    • DPP Poster Session & Student Poster Competition | Session d'affiches DPP et concours d'affiches étudiantes
      • 322
        The impact of spatio-temporal laser parameters on betatron x-ray generation from a laser-wakefield accelerator

        Laser-wakefield acceleration (LWFA) occurs when an intense laser pulse drives a high-amplitude plasma wave in underdense plasma, trapping and accelerating electron bunches to relativistic energies over millimeter to centimeter scales. During the acceleration process, transverse electron oscillations within the plasma wakefield produce high-brightness, ultrashort "betatron" x-rays. The small source size and few-femtosecond duration of these x-rays are well suited for time-resolved imaging and absorption spectroscopy, supporting medical diagnostics, laboratory-scale astrophysics, and the study of microstructural dynamics in advanced materials. However, the practical use of betatron sources as a scalable tool for x-ray probing requires minimization of shot-to-shot fluctuations in x-ray flux and pointing.

        This work addresses the critical need to improve source stability and characterization for high-throughput imaging applications. We investigate the impact of spatio-temporal laser parameters, specifically Group Delay Dispersion (GDD), Third Order Dispersion (TOD) and Pulse Front Tilt (PFT), on the LWFA process at the Advanced Laser Light Source (ALLS) in Quebec, Canada. Using the Ti:sapphire laser system at ALLS, delivering 3.2 J of energy on target at a 2.5 Hz repetition rate with a pulse duration of ~20 fs, we demonstrate that careful management of PFT is essential to mitigate off-axis x-ray beam steering and asymmetric electron injection. With such optimized laser conditions, stable betatron x-ray emission supports high-resolution imaging and tomographic scans.

        Speaker: Mr Abdulhakeem Yusuf (University of Alberta)
      • 323
        Diamagnetic Dynamo Driven Current Transport on EAST Tokamak

        The fluctuation-induced dynamo electric field has been measured in the core of high-temperature EAST tokamak plasmas using Faraday-effect polarimetry and electron cyclotron emission (ECE). The magnetic amplitude of the kink mode (m/n = 1/1) saturates at 30–50 Gauss inside the q = 1 resonant surface. Electron temperature fluctuations reach up to 10% near the resonant surface, where the gradient of electron pressure exhibits a local maximum. These temperature fluctuations are predominantly driven by magnetic perturbations, and a correlation between electron temperature and radial magnetic fluctuations gives rise to a non-vanishing parallel dynamo electric field on the order of 10 mV/m, which is comparable to the resistive electric field (η∥ J). The dynamo electric field is capable of flattening the current profile, thereby facilitating the achievement of "hybrid" modes in a steady-state magnetic equilibrium.

        Speaker: Prof. Wenzhe Mao (University of Science and Technology of China)
      • 324
        Multivariate Quantification of Gold Nanoparticles Using Nanoparticle-Enhanced LIBS

        Accurate quantification of gold nanoparticles (AuNPs) is critical for applications in nanotechnology, biomedicine, and functional materials. In this study, nanoparticle-enhanced laser-induced breakdown spectroscopy (NELIBS) is developed as a quantitative optical sensor for direct determination of AuNP concentration using multivariate calibration.

        NELIBS relies on the deposition of metallic nanoparticles onto a substrate prior to laser irradiation. Upon laser excitation, localized surface plasmon resonance (LSPR) in the metallic nanoparticles induces strong near-field enhancement, lowering the breakdown threshold and promoting more efficient plasma formation. This localized electromagnetic amplification modifies plasma temperature, electron density, and emission intensity, thereby enhancing spectral sensitivity compared to conventional LIBS.

        Colloidal 20 nm AuNPs at ten concentration levels were deposited onto titanium and aluminum substrates and analyzed using 532 nm and 1064 nm Nd:YAG excitation. Instead of single-line intensity calibration, full-spectrum partial least squares regression (PLSR) was employed to construct predictive models. The optimized PLSR model at 532 nm exhibited excellent performance (R² = 0.999) with relative prediction error below 5% and a limit of detection of ~2.5 ppm. Although 1064 nm excitation produced stronger emission enhancement at optimal AuNP concentrations, its quantitative stability was lower (~13% relative error).
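        Full-spectrum PLSR calibration of the kind described can be sketched with a minimal PLS1 (NIPALS) implementation on synthetic spectra. The Gaussian emission line, noise level, and component count below are assumptions for illustration, not the study's data or model:

```python
import numpy as np

def pls_fit(X, y, n_comp=2):
    """Minimal PLS1 (NIPALS) regression for full-spectrum calibration."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)            # weight vector
        t = Xc @ w                        # scores
        tt = t @ t
        p = Xc.T @ t / tt                 # loadings
        qk = yc @ t / tt
        Xc = Xc - np.outer(t, p)          # deflate
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)   # full-spectrum regression vector
    return B, X.mean(0), y.mean()

# Synthetic "spectra": one Gaussian emission line scaling with concentration.
rng = np.random.default_rng(2)
wl = np.linspace(0, 1, 200)
conc = np.linspace(1, 10, 10)             # ten concentration levels
X = conc[:, None] * np.exp(-((wl - 0.5) / 0.05) ** 2) \
    + 0.01 * rng.normal(size=(10, 200))
B, xm, ym = pls_fit(X, conc, n_comp=2)
pred = (X - xm) @ B + ym
r2 = 1 - np.sum((conc - pred) ** 2) / np.sum((conc - conc.mean()) ** 2)
print(round(r2, 4))
```

        With a clean concentration-proportional line, the calibration recovers the concentrations almost exactly, which is the behaviour the optimized model in the abstract reports.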

        Plasma diagnostics, including electron temperature and electron density, were evaluated to correlate enhancement behavior with predictive accuracy. The results demonstrate that coupling plasmonic enhancement with multivariate spectral modeling enables reliable and sensitive AuNP quantification.

        This work establishes NELIBS combined with PLSR as a robust analytical platform for nanoparticle concentration sensing and provides insight into the interplay between plasmonic field enhancement, excitation wavelength, and quantitative plasma spectroscopy.

        Speaker: Morteza Khalaji (University of Alberta)
    • DQI Poster Session & Student Poster Competition | Session d'affiches DIQ et concours d'affiches étudiantes
      • 325
        Round-Trip-Free Distributed Quantum Computation via Photonic Graph-State Stitching

        Distributed quantum computation is currently constrained by the latency of probabilistic inter-module entanglement. Traditional architectures rely on sequential teleportation, where the total runtime scales with the sum of geometric waiting times for each successful Bell-pair generation. In this work, we propose "stitched measurement-based quantum computation" (stitched MBQC) as a deterministic alternative that parallelizes these probabilistic events.

        Rather than establishing links gate-by-gate, our protocol prepares local photonic graph states within each module and "stitches" them at the boundaries using parallel photonic Bell-state measurements (BSMs). We derive the stabilizer formalism showing that a successful BSM on emitted photons projects the boundary qubits into a joint stabilizer state, effectively adding a graph edge between modules without requiring round-trip signaling. Once a sufficient number of edges are established, the modules form a single distributed cluster state, allowing a full layer of cross-module gates to be executed deterministically via single-qubit measurements.

        This approach transforms the latency scaling from a sum of geometric random variables to a single negative-binomial distribution, significantly thinning the heavy tail associated with entanglement attempts. Using custom MBQC gadgets designed for Grover’s search and QAOA, we demonstrate through simulation that stitching reduces expected latency by a factor of 3–6× compared to standard teleportation.
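        The latency argument can be checked with a quick Monte Carlo. The per-attempt success probability and number of boundary links below are arbitrary illustrative values, and the stitched case is simplified here to parallel attempts that complete when the slowest link succeeds:

```python
import numpy as np

rng = np.random.default_rng(3)
p, k, trials = 0.1, 8, 20000   # success prob per attempt, boundary links, samples

# Sequential teleportation: links are established one after another,
# so the total latency is a sum of k geometric waiting times.
seq = rng.geometric(p, size=(trials, k)).sum(axis=1)

# Stitched case (simplified): all k links attempted in parallel each round;
# the layer completes when the slowest link succeeds (max of geometrics).
par = rng.geometric(p, size=(trials, k)).max(axis=1)

print(seq.mean() / par.mean())                        # mean speed-up
print(np.quantile(seq, 0.99), np.quantile(par, 0.99))  # tail comparison
```

        For these parameters the mean speed-up lands in the few-times range and the 99th-percentile latency drops even more sharply, illustrating the tail-thinning effect described above.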

        Speaker: Dhyan Baruah (University of New Brunswick, Department of Electrical and Computer Engineering)
      • 326
        Simultaneous Amplitude and Phase Spectroscopy using Two-Photon Interference

        Spectroscopy is routinely applied to the measurement of photosensitive samples such as single molecules, quantum emitters (e.g., quantum dots), and perovskite solar cells. These samples are irreversibly altered when exposed to high optical powers, which necessitates the use of probe beams with low photon fluxes. In this regime, measurement statistics are dominated by shot noise, which fundamentally limits the possible measurement precision. Recent advances in quantum optics have led to the use of heralded single photons for absorption spectroscopy, allowing for sub-shot-noise sensitivity. At the same time, it is often desirable to measure the spectral phase imparted by the sample, which characterizes time-dependent processes.

        Our group proposes, and has experimentally demonstrated, a new quantum spectroscopy technique allowing for simultaneous absorption and phase measurements at low photon fluxes. The technique relies on Hong-Ou-Mandel interference between a probe photon and a herald photon, which results in a phase-sensitive interference pattern. At the same time, heralding statistics can be measured through the number of coincidence and single detections. Theoretical calculations show that these heralding statistics can be used for a sub-shot noise absorption measurement, similar to previous techniques. Our method therefore allows one to obtain additional information about the phase shifts of a sample without sacrificing quantum-enhanced absorption sensitivity.

        We report on a proof-of-principle experiment demonstrating this technique. Using a time-tagging biphoton spectrometer, we simultaneously measure the frequency of photons as well as their bunching statistics. This allows for a measurement of the spectrally-resolved Hong-Ou-Mandel interferogram, from which we estimate the absorption and phase spectra of a dye sample. Our experiment features a large 150 nm bandwidth and few-minute exposure times, making our technique a viable practical tool for low-flux spectroscopic measurements.

        Speaker: Kyle Jordan (National Research Council Canada)
      • 327
        Towards Electrostatically Enabled Single-Photon Emitters in WSe₂ Using Graphite Nanopore Gating

        Single-photon emitters (SPEs) are fundamental for optical-based quantum technologies [1]; however, high-quality and practically usable SPEs require properties that are hard to realize in the lab. Ideal SPEs must have high purity, produce indistinguishable photons with high brightness, and be easily reproducible and scalable. A potential avenue to achieve these properties is to use two-dimensional (2D) materials. Recently, it has been shown that 2D materials like hexagonal boron nitride (hBN) and transition metal dichalcogenides (TMDs) can host SPEs with the desired properties. Using defects created by high-temperature annealing and focused ion beam (FIB) irradiation, hBN has produced high-purity single photons at room temperature [2]. In parallel, combining strain from nanoscale stressors with defects introduced by electron beam irradiation has also yielded high-purity single-photon emission in tungsten diselenide (WSe2) [3].

        In this work, we aim to deterministically create scalable SPEs. To do so, we electrostatically define them in monolayer WSe₂ using nanopatterned graphite screening gates. A graphite flake patterned with periodic 30 nm nanoholes is used to screen the electric field from a global back gate, allowing manipulation of the field at the WSe₂ layer and the creation of local minima that could trap charges over that 30 nm diameter and activate emitters. Following fabrication, characterization techniques such as photoluminescence spectroscopy and Hanbury Brown–Twiss interferometry will be applied to our samples, both at room and cryogenic temperatures.

        1- Sunny Gupta, Wenjing Wu, Shengxi Huang, and Boris I. Yakobson. The Journal of Physical Chemistry Letters 2023 14 (13), 3274-3284 DOI: 10.1021/acs.jpclett.2c03674

        2- Grosso, G., Moon, H., Lienhard, B. et al. Tunable and high-purity room temperature single-photon emission from atomic defects in hexagonal boron nitride. Nat Commun 8, 705 (2017). https://doi.org/10.1038/s41467-017-00810-2

        3- Parto, K., Azzam, S.I., Banerjee, K. et al. Defect and strain engineering of monolayer WSe2 enables site-controlled single-photon emission up to 150 K. Nat Commun 12, 3585 (2021). https://doi.org/10.1038/s41467-021-23709-5

        Speaker: Mohamed Amine Kabraoui (University of Ottawa)
      • 328
        Overcoming environmentally-induced timing instability in time-bin quantum communication over fibre

        Time-bin encoding is a promising platform for long-distance, optical fibre-based quantum key distribution (QKD) because it can support high communication rates and is robust to the polarization instability present in commercial single-mode fibres. Practical time-bin encoding requires the bin separation time to be small enough to maintain sufficient phase stability in interferometric measurement schemes, while being large enough for the bins to still be distinguishable.

        The use of ultrafast optics and fast, low-jitter detectors can enable the preparation and measurement of time-bins separated by only a few picoseconds. However, an optical fibre deployed in a real-world environment may be exposed to fluctuating temperatures, which in turn results in a fluctuating effective refractive index due to thermal expansion. This fluctuation can be large enough to create ambiguity in whether a photon is detected in an “early” or “late” time-bin. Disambiguating the time-bins requires the use of a suitable clock or signal that can maintain a consistent time delay relative to the time-bin states, regardless of absolute fluctuations in the fibre effective index.

        In this work, we propose the use of wavelength-division multiplexing in a single-mode fibre to co-propagate O-band time-bin states with a C-band optical pulse train acting as a reference clock. In our setup, a 1040 nm femtosecond pulsed laser is used to generate linearly-polarized photons at 1345 nm via spontaneous four-wave mixing in a birefringent fibre; these photons are subsequently converted into corresponding time-bin states by inducing a polarization-dependent delay using a birefringent crystal (α-BBO). The laser simultaneously pumps an optical parametric oscillator which generates pulses at 1550 nm synchronously with the 1345 nm photons. Both fields are then co-propagated through a 10-km spool of SMF-28 fibre. The two fields are demultiplexed after the fibre, where the 1550 nm pulses are used to trigger/gate the fast detectors used to measure the 1345 nm time-bin states. We characterize the stability of the relative signal-clock delay under real-world conditions by varying the temperature of the fibre spool, and assess the feasibility of using this procedure for synchronization in time-bin QKD.

        Speaker: Mr Timothy Lee (University of Ottawa)
    • DSS Poster Session | Session d'affiches DSS
      • 329
        Individual silicon atom abstraction from Si(100) enabled by mechanosynthesis and inverted-mode scanning tunneling microscopy

        Atomically precise manipulation of individual, covalently bonded atoms has been a difficult problem for both bottom-up and top-down approaches in nanotechnology. In this work, we employ our recently developed inverted-mode scanning tunneling microscopy (IM-STM) as a platform for atomically precise mechanosynthesis [1]. In our chosen application, individual Si atoms can be routinely abstracted from a Si(100) crystalline surface with sub-Ångström resolution by piezo-actuator controlled mechanical interaction with individual “molecular tools”, which serve both as local reagents for chemical modification of the Si probe and as in situ product-imaging molecules for IM-STM. We highlight the use of one such newly synthesized tripodal molecular tool: MAOC-C2I, for highly reproducible generation of individual and patterned Si vacancies and divacancies, as well as subsequent targeted reconstructions of the Si surface. The terminal ethynyl iodide (-C2I) functional group of the covalently surface-bound MAOC-C2I, once “activated” in situ with applied bias pulses, presents a –C2* radical moiety anchored by a C bridgehead atom to bond with and mechanically abstract the target Si atom from a Si(100)-2x1 dimer pair. Our results demonstrate new routes to patterning the Si(100) surface as a potential complementary technique to hydrogen depassivation lithography, highlighting additional capabilities towards atomically precise fabrication. These advances show promise for further developments in mechanosynthesis-enabled nanotechnologies with the highly flexible chemical platform afforded by our molecular tool design [2].

        1. E. Barrera et al., “Inverted-mode scanning tunneling microscopy for atomically precise fabrication,” arXiv:2512.24431 [cond-mat.mes-hall] (2025), https://doi.org/10.48550/arxiv.2512.24431 (submitted for peer review). 
        2. T. Huff et al., “Molecular tools for non-planar surface chemistry,” arXiv:2508.16798 [cond-mat.mtrl-sci] (2025), https://doi.org/10.48550/arXiv.2508.16798 (submitted for peer review).
        Speaker: Zehra Ahmed (CBN Nano Technologies Inc.)
      • 330
        Deterministic Placement of C2H Molecules on Si(100) Using Inverted-Mode STM

        Here, we present an atomically precise fabrication (APF) approach based on the bottom-up building of covalently bonded structures on surfaces through the addition, abstraction, and manipulation of individual atoms or small few-atom moieties/functional groups. Our approach to mechanosynthetic APF is enabled by Inverted-Mode Scanning Tunneling Microscopy (IM-STM), a technique that uses tailored 3D molecules deposited on a sample to scan and react with a flat, crystalline silicon probe, enabling reagent transfer to the probe apex with sub-angstrom precision. This capability opens potential pathways for deterministic placement of qubit structures at specific atomic sites and orientations, as well as precise surface patterning or modification of materials.
        We demonstrate the site- and orientation-specific formation of C2H on a Si(100) probe apex through IM-STM. The probe interacted with custom-synthesized 3D molecules (MAOGe-C2H) presenting an upright C2H moiety, or “feedstock”, which have relevance for quantum applications as the main constituent in T centres. By precisely moving the probe, the feedstock can be positioned at a specific silicon dimer on the probe, and a tailored X-Y-Z trajectory can be leveraged to transfer C2H in the desired configuration and orientation.
        Through the choice of trajectory, C2H could be transferred to a single dimer or between two dimers, with success rates of 87 ± 4% and 72 ± 1%, respectively. Simulated STM images using Density Functional Theory showed strong agreement with experiments, providing high confidence in the proposed atomic configurations. Structures containing numerous C2H units were consistently fabricated, with controlled molecule orientation in each transfer.
        Our ability to reproducibly control the position and orientation of individual molecules during transfer via mechanosynthesis represents a significant advance in APF. These results open new avenues for future development of complex, functionally relevant structures at the atomic scale and could also inform future efforts in qubit integration, T centre fabrication, and atomically precise surface engineering.

        Speaker: Jacqueline Kort-Mascort (CBN Nano Technologies Inc.)
      • 331
        Force to Spin a Molecular Rotor

        Frequency-modulated atomic force microscopy (fm-AFM) is well suited to probe forces between the tip and surface. Vertical forces can be inferred by sampling the frequency shifts while varying the tip-sample distance above a point on the sample. In our work, we explore a custom-synthesized molecule: tetrakis(iodomethyl)germane (Ge(CH2I)4; TIMe-Ge), which consistently presents a near-normal-facing CH2I group upon deposition on a depassivated Si(100) surface. At 4 K, the normal-facing CH2I group remains stationary in one of three energy-minimized rotamers. The group can be manipulated into each of the three rotamers by using the attractive force between the tip and the CH2I.

        We employ the methodology used by Ternes et al. [1] to measure the horizontal force required to switch between rotamers. In this work, we iteratively took frequency shift measurements along lines parallel to the sample surface at sequentially smaller tip-sample separations. The line direction is off-center from the TIMe-Ge molecule so that the normal-facing CH2I group is manipulated into a new rotamer. The frequency shift data can then be used to determine the potential energy landscape at each point along this line. By taking a partial derivative along the direction of the horizontal line, horizontal forces can be inferred at each point, including the point at which the CH2I group rotates. By demonstrating the ability to measure this force for a single bond, this work highlights the utility of fm-AFM as a technique to investigate molecular rotors and other machines.
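        The inversion chain (frequency shift to vertical force to potential to lateral force) can be sketched on a synthetic small-amplitude dataset. The model potential and cantilever parameters below are assumptions for illustration, not the experimental values:

```python
import numpy as np

def integrate_from_far(y, t):
    """Cumulative trapezoid integral from the far end inward along the
    last axis, so the result vanishes far from the surface."""
    seg = 0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(t)
    out = np.zeros_like(y)
    out[..., :-1] = -np.cumsum(seg[..., ::-1], axis=-1)[..., ::-1]
    return out

# Model tip-sample potential standing in for measured data (assumed form).
x = np.linspace(0, 8e-10, 81)          # lateral scan position (m)
z = np.linspace(2e-10, 3e-9, 400)      # tip-sample distance (m)
X, Z = np.meshgrid(x, z, indexing="ij")
U_true = -1e-19 * np.exp(-Z / 2e-10) * (1 + 0.3 * np.cos(2 * np.pi * X / 4e-10))

# Synthetic frequency-shift map in the small-amplitude limit:
# df = -(f0 / 2k) dFz/dz, with Fz = -dU/dz. Cantilever values are assumed.
f0, k = 25e3, 1800.0
Fz_true = -np.gradient(U_true, z, axis=1)
df = -(f0 / (2 * k)) * np.gradient(Fz_true, z, axis=1)

# Inversion: df -> dFz/dz -> Fz -> U, then lateral force Fx = -dU/dx.
Fz = integrate_from_far(-(2 * k / f0) * df, z)
U = -integrate_from_far(Fz, z)
Fx = -np.gradient(U, x, axis=0)

Fx_true = -np.gradient(U_true, x, axis=0)
err = np.max(np.abs(Fx[:, 0] - Fx_true[:, 0])) / np.max(np.abs(Fx_true[:, 0]))
print(err)  # small relative error at closest approach
```

        Differentiating the recovered energy landscape along the scan line reproduces the lateral force, which is the quantity used in the abstract to identify the rotamer-switching event.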

        1. Ternes, M., Lutz, C. P., Hirjibehedin, C. F., Giessibl, F. J. & Heinrich, A. J. The Force Needed to Move an Atom on a Surface. Science 319, 1066–1069 (2008).

        Speaker: Henry Rodriguez (CBN Nano Technologies Inc.)
      • 332
        Calcium Pipe Diffusion Along Dislocations in MgO(100) Single Crystal

        In surface science, MgO is attractive due to its highly stable, atomically flat (100) vicinal surface for epitaxial growth. Annealing is commonly used to form well-defined terraces on MgO single-crystal substrates. In MgO crystals, however, impurities can redistribute during thermal treatment; calcium is commonly reported as one of the most prevalent impurities in MgO single crystals and is known to segregate to the surface upon annealing. It has also been reported that mechanical surface polishing may increase CaO segregation, likely because polishing introduces dislocations near the surface. Dislocations further enhance segregation by attracting impurities such as Ca, promoting accumulation along dislocation lines and at the surface.
        To understand the segregation mechanism, MgO(100) single crystals were annealed at 1100 °C under 80 sccm O2 flow for different annealing times. Annealed samples are characterized by terraces separated by atomic steps and a secondary phase distinct from the bulk MgO. AFM phase images show a phase contrast of ~20°, which corresponds to the presence of a different material on the MgO single-crystal surface. TEM elemental mapping identifies these features as CaO particles. TEM images also provide insight into the underlying lattice structure: a high density of dislocations below the MgO surface becomes evident. These dislocations are concentrated near the segregated CaO particles, suggesting a strong interaction between lattice defects and surface impurities. Ca atoms migrate to the surface along dislocation cores, forming CaO particles that act as pinning centers at the edges of MgO terraces. In some of these pinned regions, step bunching is observed, which serves as further evidence for the presence of dislocations beneath the CaO particles. This behavior also demonstrates the role of pipe diffusion in facilitating the transport of Ca to the surface, highlighting the strong coupling between dislocation-mediated diffusion and surface morphology evolution.

        Speaker: Ms Elnaz Familsatarian (Institut national de la recherche scientifique, centre Énergie, Matériaux, Télécommunications (INRS-EMT) 1650 blvd Lionel Boulet, J3X 1P7 Varennes, QC, Canada)
    • DTP Poster Session & Student Poster Competition | Session d'affiches DPT et concours d'affiches étudiantes
      • 333
        Temporal Entanglement: Two Particles or Two Observation Points?

        In a novel relational time model, a single temporal node maps multiple spatial points onto indeterminate locations. One observer’s measurement selects a deterministic spatial outcome, forming a bound relational state; a second observer independently does the same at the same temporal anchor.

        Correlations arise not from particle interactions but from referencing the same relational instant of now. This “temporal entanglement” replaces particle linkage with observer relations, explaining nonlocal quantum correlations as features of time itself.

        Exploration of this concept offers opportunity for further theoretical development.

        Speaker: Dr Steven Moore
      • 334
        Machine Learning Frameworks for Magnetohydrodynamics: Integrating Physics-Informed and Data-Driven Methods

        The gravitational collapse of magnetized astrophysical fluids underlies a wide range of phenomena, from protostellar formation to large-scale structure evolution. These systems are governed by magnetohydrodynamic (MHD) equations, a nonlinear and tightly coupled set of partial differential equations that pose significant challenges for traditional numerical solvers. High-resolution finite-volume methods accurately capture shocks and discontinuities but scale poorly as spatial and temporal resolution increases, especially in multi-dimensional simulations.
        This work investigates Physics-Informed Neural Networks (PINNs) as a complementary framework for solving well-established MHD benchmark problems without conventional time-marching schemes. Three representative cases are considered. In one dimension, PINNs are applied to the Brio–Wu shock tube and the slowly moving shock variant to test recovery of discontinuous solutions governed by the ideal MHD equations. In two dimensions, the method is extended to gravitationally stratified MHD systems exhibiting magnetic buoyancy, including the Parker instability. As a third component, PINNs are explored as solvers for the elliptic Poisson equation governing the gravitational potential of a thin disk, using established polar-coordinate and convolution-based formulations as reference solutions.
        In each case, the governing equations are embedded directly into the network loss function, and the physical variables are represented by continuous neural approximations. The results are compared with numerical solutions and purely data-driven models to evaluate accuracy and stability in shock-dominated flows, instability evolution, and elliptic gravitational field calculations. These comparisons provide a quantitative assessment of PINN performance on classical MHD benchmark problems relative to conventional numerical approaches, highlighting both their capabilities and current limitations.
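        The core idea (embedding the governing equations in the loss function) can be illustrated with a toy residual loss for 1-D linear advection, standing in here for the full MHD system. Derivatives are taken by finite differences on a grid rather than the automatic differentiation a real PINN would use:

```python
import numpy as np

def pde_residual_loss(u, x, t, a=1.0):
    """Physics loss for the 1-D advection equation u_t + a u_x = 0,
    evaluated on a grid of collocation points. In a PINN, u would be a
    neural network and the derivatives would come from autodiff; this
    sketch only shows how the equation itself becomes a loss term."""
    u_t = np.gradient(u, t, axis=1)
    u_x = np.gradient(u, x, axis=0)
    return np.mean((u_t + a * u_x) ** 2)

x = np.linspace(0, 2 * np.pi, 100)
t = np.linspace(0, 1, 50)
X, T = np.meshgrid(x, t, indexing="ij")

exact = np.sin(X - T)      # travelling wave: exact solution for a = 1
wrong = np.sin(X - 2 * T)  # wrong wave speed: violates the PDE

print(pde_residual_loss(exact, x, t), pde_residual_loss(wrong, x, t))
```

        A candidate field that satisfies the PDE produces a near-zero physics loss while one that violates it does not; training a PINN amounts to minimizing this residual (plus boundary and data terms) over the network parameters.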

        Speaker: Ms Malavika Nair (Western University)
      • 335
        Testing ER = EPR with Hydrogen

        According to the ER = EPR conjecture, entangled particles are connected by quantum wormholes. Under the assumption that some of the electric field surrounding an entangled charged particle leaks into the wormhole, we show that this effect will modify the hyperfine structure of the Hydrogen atom. In addition, if the quantum wormholes are nontraversable, this will also lead to a nonzero total effective charge for the Hydrogen atom. These effects provide strong constraints on the amplitude of this potential ER = EPR effect, given high-precision measurements of the Hydrogen atom's hyperfine structure and total charge.

        Speaker: Irfan Javed (University of New Brunswick)
      • 336
        Active deep learning methods for regression

        Active learning (AL) is under-researched for regression problems, with a majority of publications focusing on classification tasks. There is also a need for research concerning AL methods for deep learning. These methods are often approximations by necessity, as AL requires the models to not only predict current performance, but also changes in future performance. Deep learning methods do not directly provide this information.

        We investigate recent methods of AL data selection for regression including variance-based query-by-committee, Fisher information matrices, and feature-based largest cluster maximum distance selection. Within the query-by-committee methods, we compare two dropout pseudo-ensemble committee frameworks as well as a true ensemble setup. We test these methods on synthetic regression datasets that increase in complexity to recommend best use cases. We also use our findings of the methods' relative strengths and weaknesses to determine an optimal method for a high-dimensional nanophotonics inverse design project.
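        A minimal variance-based query-by-committee step for regression can be sketched with bootstrapped polynomial fits standing in for the committee (the abstract's committees are deep ensembles and dropout pseudo-ensembles; the dataset here is a toy assumption):

```python
import numpy as np

rng = np.random.default_rng(4)

def committee_variance_query(X_train, y_train, X_pool, n_models=10, deg=3):
    """Fit a bootstrap committee of polynomial models and select the pool
    point where their predictions disagree most (highest variance)."""
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), len(X_train))  # bootstrap sample
        coeffs = np.polyfit(X_train[idx], y_train[idx], deg)
        preds.append(np.polyval(coeffs, X_pool))
    var = np.var(preds, axis=0)
    return int(np.argmax(var)), var

# Labeled points cluster on the left; the committee should be most
# uncertain about the unexplored right side of the pool.
X_train = rng.uniform(0, 0.4, 15)
y_train = np.sin(3 * X_train) + 0.05 * rng.normal(size=15)
X_pool = np.linspace(0, 1, 101)
query, var = committee_variance_query(X_train, y_train, X_pool)
print(X_pool[query])  # selected query point, far from the labeled region
```

        The committee's disagreement is largest where the data are sparsest, so the query lands in the unlabeled region, which is the core behaviour the variance-based selection methods above rely on.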
        
        Speaker: Kieran Watts (University of Ottawa)
      • 337
        Thermodynamics of Polyhedral Spin Ice

        Artificial spin ices (ASIs) are nanostructured arrays of ferromagnetic elements designed to study frustration and emergent phenomena. ASI studies have largely been confined to two-dimensional lattice geometries. A recent study has extended ASIs to three dimensions, with a buckyball-shaped arrangement of nanomagnets. Inspired by this study, we investigate ASIs in various polyhedral geometries: the five Platonic solids (tetrahedron, cube, octahedron, dodecahedron, icosahedron) and one particular Archimedean solid, the cuboctahedron. Each polyhedron is modelled as a collection of bar magnets along its edges. Due to strong shape anisotropy, the magnets are Ising-like with two possible orientations. We enumerate all possible magnetic configurations and calculate their energies from dipolar interactions. Using the canonical ensemble approach, we study their properties as a function of temperature. We see two properties that ‘emerge’ at low temperatures: (i) the ice-rule constraint and (ii) chirality on polygonal faces. Our results may guide experimental studies on ASI construction, thermal properties and dynamics.
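        The enumeration approach scales as 2^N in the number of edge magnets, so for the tetrahedron (N = 6) the full canonical calculation fits in a short script. The point-dipole model and the reduced units below are simplifying assumptions for illustration:

```python
import numpy as np
from itertools import combinations, product

# Tetrahedron: one Ising dipole per edge, moment along the edge direction.
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
edges = list(combinations(range(4), 2))
pos = np.array([(verts[i] + verts[j]) / 2 for i, j in edges])
axis = np.array([verts[j] - verts[i] for i, j in edges])
axis /= np.linalg.norm(axis, axis=1, keepdims=True)

def dipolar_energy(spins):
    """Pairwise point-dipole energy in units with mu0 m^2 / 4 pi = 1."""
    E = 0.0
    for a, b in combinations(range(6), 2):
        r = pos[b] - pos[a]
        d = np.linalg.norm(r)
        rhat = r / d
        ma, mb = spins[a] * axis[a], spins[b] * axis[b]
        E += (ma @ mb - 3 * (ma @ rhat) * (mb @ rhat)) / d**3
    return E

# Exact enumeration of all 2^6 Ising configurations.
configs = np.array(list(product([-1, 1], repeat=6)))
energies = np.array([dipolar_energy(s) for s in configs])

# Canonical ensemble: heat capacity C(T) = Var(E) / T^2.
def heat_capacity(T):
    w = np.exp(-(energies - energies.min()) / T)
    Z = w.sum()
    Em = (w * energies).sum() / Z
    return (w * (energies - Em) ** 2).sum() / Z / T**2

Ts = np.linspace(0.05, 5, 100)
C = np.array([heat_capacity(T) for T in Ts])
print(Ts[C.argmax()])  # temperature of the heat-capacity peak
```

        The same enumerate-then-weight procedure extends to the other polyhedra (at higher cost), and low-temperature expectation values computed this way are where emergent constraints such as the ice rule become visible.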

        Speaker: Lindsay Tait (Brock University)
    • PPD Poster Session & Student Poster Competition | Session d'affiches PPD et concours d'affiches étudiantes
      • 338
        Modelling and Experimental Design for Directional Sensitivity in the NEWS-G Dark Matter Experiment

        Dark matter (DM) constitutes the majority of the universe’s mass but remains undetected on Earth. The New Experiments with Spheres - Gas (NEWS-G) experiment is designed to directly detect low-mass dark matter candidates using a spherical proportional counter filled with light noble gases, enabling sensitivity to single electrons. At these low energies, the coherent elastic scattering of solar neutrinos (CE𝜈NS) will pose an irreducible background, creating the so-called solar neutrino floor that limits conventional searches. Directional sensitivity offers a powerful strategy to discriminate between dark-matter-induced events and solar neutrino backgrounds, as these signals originate from distinct directions.
        In this presentation, I will describe advanced computational and modelling tools developed for reconstructing particle direction in the NEWS-G detector, leveraging its 11-anode sensor geometry. I will explain how spatial and temporal distributions of charge collected on multiple anodes are used to infer the trajectories of ionizing particles. Additionally, I will introduce machine learning techniques for event direction reconstruction based on detector observables, and discuss performance results that help identify optimal detector configurations for directional sensitivity. This simulation and computational work will establish a robust framework, enabling precise comparisons with experimental data to go beyond the solar neutrino floor with the NEWS-G experiment.
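As a heavily simplified illustration of inferring a direction from multi-anode charge sharing (this is not the collaboration's reconstruction algorithm; the anode layout and charges below are invented), one can take a charge-weighted mean of unit vectors toward each anode:

```python
import numpy as np

# Hypothetical unit vectors toward six anodes (invented layout, not NEWS-G's)
anode_dirs = np.array([
    [1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1],
], dtype=float)

def estimate_direction(charges):
    """Charge-weighted mean of anode directions, normalized to a unit vector."""
    v = (charges[:, None] * anode_dirs).sum(axis=0)
    return v / np.linalg.norm(v)

# A toy event depositing most charge on the +z anode points roughly along +z
charges = np.array([0.1, 0.1, 0.1, 0.1, 0.9, 0.05])
print(estimate_direction(charges))
```

Real reconstruction would exploit timing as well as charge, which is where the machine learning techniques mentioned above come in.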

        Speaker: Keiran Nicholson (University of Alberta)
      • 339
        Mind the Gap: Temperature Separation in Two-Step Electroweak Breaking

        Electroweak phase transitions in the early Universe are widely studied because they can provide the out-of-equilibrium conditions needed for baryogenesis. We consider a two-field scenario where an extra real scalar field is coupled to the Higgs field, and the thermal history happens in two steps: first, the scalar field develops a non-zero vacuum expectation value while the Higgs vacuum expectation value stays zero; later, the roles switch so that the scalar vacuum expectation value goes back to zero and the Higgs vacuum expectation value becomes non-zero, completing the electroweak transition. Our main goal is to measure how far apart these two transitions can be in temperature, i.e., how large the temperature gap can be between the onset of the scalar phase and the nucleation of the Higgs transition. We focus on transitions in the electroweak range from roughly 10 GeV up to 1 TeV, and discuss how making the two steps more separated can affect a baryogenesis scenario.

        Speaker: Kimia Ghanaatpisheh (Carleton University)
      • 340
        SuperCDMS Programmable Logic Controller System

        SuperCDMS is searching for dark matter in the sub-10-GeV region using silicon and germanium detectors cooled to below 20 mK. The cooling is done with a suite of cryocoolers, including two Gifford-McMahon coolers, three pulse-tube coolers, and one dilution refrigerator. For cryogenic control, SuperCDMS relies on a large number of thermometers, heaters, pressure transducers, and valves to control and monitor both the cryogenic stages and the lab environment. In order to control and read out these devices, we created a Programmable Logic Controller (PLC) system. The system collects data from the sensors using a large array of EtherCAT modules from Beckhoff as well as several Arduinos and smaller computers. We use these data for our monitoring and alarm system. To monitor the status of the experiment, we created a web page that members of the collaboration can access to view the readings of each sensor. For the alarms, we have scripts that send alerts to designated shifters when a sensor goes out of its expected range or when part of the monitoring system itself stops working. The PLC system is also used to control important components through a TwinCAT interface, also from Beckhoff. In this poster, I will describe the technical design of this PLC system as well as the work we have done to make it functional.
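The range-checking logic behind such an alarm system can be sketched as follows; the sensor names and limits are hypothetical illustrations, not the actual SuperCDMS configuration:

```python
def check_sensors(readings, limits):
    """Return alert messages for sensors outside their expected ranges
    or missing from the latest readout (names and limits are hypothetical)."""
    alerts = []
    for name, (lo, hi) in limits.items():
        value = readings.get(name)
        if value is None:
            alerts.append(f"{name}: no reading (monitoring gap?)")
        elif not (lo <= value <= hi):
            alerts.append(f"{name}: {value} outside [{lo}, {hi}]")
    return alerts

limits = {"mixing_chamber_K": (0.0, 0.02), "still_pressure_mbar": (0.1, 10.0)}
readings = {"mixing_chamber_K": 0.015}   # still pressure missing this cycle
print(check_sensors(readings, limits))
```

A real deployment would attach these alerts to a notification channel for the designated shifters; the sketch only shows the decision logic.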

        Speaker: Cassandra Harms (University of Toronto)
      • 341
        Applying Machine Learning Techniques to the PIONEER Experiment’s Active Target

        The PIONEER experiment is a next-generation investigation into rare pion decays, aimed at probing beyond the Standard Model (SM) physics by accurately measuring the charged-pion branching ratio of electrons versus muons Re/$\mu$ - providing a sensitive test of Lepton Flavour Universality. The SM provides a calculation of Re/$\mu$ at the 0.01% level, a factor 15 times better than the most precise experimental measurement. The PIONEER experiment aims at closing the precision gap between theory and experiment, with the possibility of revealing new physics at the PeV scale. PIONEER has been proposed and approved with high priority at the Paul Scherrer Institute in Switzerland.

        This poster will focus on applying machine learning (ML) techniques to PIONEER’s active target (ATAR). The ATAR is a central piece of the detector and a key enhancement over earlier experiments measuring Re/$\mu$. It is composed of 48 successive planes of 120$\mu$m thin low-gain avalanche diodes (LGADs). This technology allows 5-dimensional event reconstruction providing precise timing, energy, and three-dimensional spatial information. ML offers an attractive alternative to traditional reconstruction and analysis approaches for pattern recognition. This work explores the integration of a neural-network architecture based on transformers into the ATAR reconstruction pipeline, and investigates its performance and potential for particle identification and tracking.

        Speaker: Meghan Naar (TRIUMF)
      • 342
        Dynamical Signatures of Dark Matter Halo Rotation in Tidal Streams

        This study investigates the impact of dark matter halo rotation on satellite orbital decay rates and tidal stream formation, with a focus on the role of dynamical friction and energy loss. Using N-body simulations and analytical models, the effects of varying halo spin isotropies on satellite motion and the resulting tidal streams are examined. The results show that co-rotation with the satellite’s orbit reduces the orbital decay rate, while counter-rotation accelerates it. The strength of rotation is directly related to the intensity of dynamical friction, with stronger rotations leading to greater changes in the rate of energy loss. These findings suggest that tidal stream dynamics are sensitive to the rotational properties of dark matter halos, and indicate that tidal streams can serve as effective tracers of halo rotation. This highlights their potential as a tool for probing the dynamical state of dark matter halos.
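The velocity dependence that underlies the co- versus counter-rotation asymmetry can be illustrated with the standard Chandrasekhar dynamical-friction formula, evaluated at the satellite-background relative speed. All numbers below are invented for illustration and are not the parameters of the simulations described above.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def df_decel(v_rel, M, rho, sigma, ln_lambda=10.0):
    """Magnitude of the Chandrasekhar dynamical-friction deceleration for a
    satellite of mass M moving at v_rel relative to a Maxwellian background
    of density rho and velocity dispersion sigma (SI units)."""
    X = v_rel / (math.sqrt(2.0) * sigma)
    bracket = math.erf(X) - 2.0 * X / math.sqrt(math.pi) * math.exp(-X * X)
    return 4.0 * math.pi * G**2 * M * rho * ln_lambda * bracket / v_rel**2

# Illustrative numbers: 1e10 solar-mass satellite on a 200 km/s orbit through
# a halo streaming at +/-50 km/s (co- vs counter-rotating), sigma = 200 km/s
M, rho, sigma = 1e10 * 2e30, 1e-22, 200e3
a_co = df_decel(150e3, M, rho, sigma)       # co-rotation: smaller v_rel
a_counter = df_decel(250e3, M, rho, sigma)  # counter-rotation: larger v_rel
print(a_co, a_counter)  # drag is weaker in the co-rotating case
```

For sub-dispersion relative speeds the drag grows roughly linearly with v_rel, so lowering the relative speed (co-rotation) weakens the friction and slows orbital decay, consistent with the trend reported above.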

        Speaker: Devon Leskiw (Carleton University)
      • 343
        To B, or Not to B: Is B-Meson Entanglement Measurable?

        Entangled quantum systems violate Bell's inequality. Quantum entanglement has been observed in photon systems through Bell's inequality tests. Particle physicists have been interested in testing Bell's inequality at various particle colliders, e.g. through top quark pairs at the LHC, providing a new realm of entangled systems at much higher energies than previously observed. One such entangled system is neutral B meson pairs at B-factories. The Belle Collaboration reported an observation of Bell's inequality violation in entangled B mesons [Go, 2003]. However, later theoretical work [Bramon et al., 2005] raised concerns about these results and claimed that Bell's inequality tests cannot be performed at particle colliders. We analyze the origin of this discrepancy in the literature by comparing the theoretical assumptions used in the two approaches.
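For reference, the textbook CHSH form of Bell's inequality bounds any local hidden-variable model by $|S| \le 2$, while the singlet-state correlation $E(a,b) = -\cos(a-b)$ reaches $2\sqrt{2}$ at the optimal analyzer angles. A minimal numerical check of this standard result (not specific to the B-meson analyses discussed above):

```python
import numpy as np

def E(a, b):
    """Singlet-state spin correlation for analyzer angles a and b."""
    return -np.cos(a - b)

# Optimal CHSH analyzer angles for the singlet state
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))  # 2*sqrt(2) ~ 2.828, exceeding the local bound of 2
```

The controversy in the B-meson case is precisely about whether the experimentally accessible observables play the role of freely chosen analyzer settings in this construction.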

        Speaker: Xiaoqing Wu (Carleton University)
      • 344
        Precision spectroscopy of rare-earth nuclei to search for T-violation

        Astrophysical observations suggest that the Standard Model of particle physics is incomplete. The abundance of matter over antimatter in the universe requires new physics that violates time-reversal (T) symmetry. Moreover, axionlike particles, a well-motivated dark matter candidate, produce oscillating T-violation. Gluon couplings to new particles or fields lead to T-violating nuclear moments, which can be measured through their effect on the hyperfine structure of atomic ions in a crystal. In our experiment, we use optical transitions to measure these effects. To perform the precision spectroscopy, we have developed a stable laser system and an optical cryostat. In this poster we present the experimental setup and discuss recent measurements.

        Speaker: Florian Brandstatter (University of Toronto)
      • 345
        Deep Learning Based Pseudo-CT Generation for Attenuation Correction in Standalone Brain PET

        Positron emission tomography (PET) is a molecular imaging technique that detects gamma photons from an injected radiotracer to map biological activity in the body. Accurate attenuation correction provided by computed tomography (CT) maps is important for PET image quality, but standalone PET scanners do not have a co-registered CT map, which limits the correction quality and decreases clinical interpretability. This project’s goal is to develop a CT-free attenuation correction workflow for dedicated brain PET by synthesizing pseudo-CT images from the PET images and then converting them into attenuation maps (μ-maps). A deep-learning-based Dual-Stage Generative Adversarial Network (DSG-GAN) model was trained on paired brain PET/CT data to learn the PET-to-CT translation, after which the synthetic CT outputs were converted to μ-maps using tissue segmentation and HU-to-attenuation conversion, with post-processing steps (threshold tuning, bilinear scaling, and intensity normalization) to improve anatomical quality. A template-based pipeline was used to support hybrid solutions, including affine PET-to-PET registration to template space and deformation of template μ-maps using the same transform. The μ-maps were evaluated with quantitative tools, including the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), and voxel-wise error mapping, together with initial affine registration tests. Preliminary work on a reduced subset produced promising synthetic CT-derived μ-maps; next steps include training the deep learning model on the full PET/CT dataset and comprehensive benchmarking of attenuation-corrected reconstructions.
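The HU-to-attenuation conversion step can be sketched with the widely used bilinear model: one linear segment from air to water and a second, shallower segment from water to bone. The 511 keV coefficients below are approximate textbook-style values chosen for illustration, not the values used in this work.

```python
import numpy as np

# Illustrative linear attenuation coefficients at 511 keV (approximate)
MU_WATER = 0.096   # cm^-1
MU_BONE = 0.172    # cm^-1
HU_BONE = 1000.0   # assumed HU of the bone reference point

def hu_to_mu(hu):
    """Bilinear conversion of CT numbers (HU) to linear attenuation
    coefficients (cm^-1) at the PET annihilation energy of 511 keV."""
    hu = np.asarray(hu, dtype=float)
    soft = MU_WATER * (1.0 + hu / 1000.0)                   # air ... water
    bone = MU_WATER + hu * (MU_BONE - MU_WATER) / HU_BONE   # water ... bone
    return np.clip(np.where(hu <= 0.0, soft, bone), 0.0, None)

print(hu_to_mu([-1000, 0, 1000]))  # air -> 0, water -> 0.096, bone -> 0.172
```

In the workflow above this conversion is applied voxel-wise to the synthetic CT before the post-processing steps.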

        Speaker: Grace Zhang (University of Western Ontario)
      • 346
        Non-thermal dark matter produced just after inflation

        Astronomical observations indicate that dark matter exists, but what it is and how it is produced remains unsolved. We consider the possibility that dark matter is made up of a sea of cold degenerate fermions. Such particles may be produced out of thermal equilibrium in the very early universe, just after inflation. Production happens through the non-perturbative mechanism of fermionic preheating, during coherent oscillations of the inflaton field. The production mechanism is described by numerical solutions of an oscillator-like differential equation with natural frequency $\Omega_{\kappa}$ for momentum $\kappa$, which depends on a parameter $q$ that quantifies how strongly the inflaton field couples to the fermions. We find that the actual momentum spectrum of the fermions produced by this mechanism is not degenerate. For small $q$ ($\lesssim 0.01$), instead of a momentum sphere, the major contributions to the total number density of fermions come from momentum shells which correspond to resonance peaks in the momentum distribution. For larger $q$, we observe a major contribution from an approximately half-filled sphere and sub-dominant contributions from the resonance peaks. We find a simple semi-analytic relation using the average of $\Omega_{\kappa}$ over time that predicts the momentum values where resonance peaks are present without the need to numerically solve a differential equation. We then obtain analytic power-law approximations for the total number density of fermions, accounting for contributions from the full momentum distribution. We find good approximations in two regimes: the total number density is proportional to $q^{1/2}$ for $q\lesssim 0.01$ and proportional to $q^{3/4}$ for $q\gtrsim 10$.
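The resonance-peak structure described above can be illustrated with a generic Mathieu-like stand-in for the mode equation (the actual preheating equation and parameters differ; this is purely a qualitative sketch): a mode whose frequency sits inside a resonance band grows exponentially, while one outside stays bounded.

```python
import math

def evolve_mode(kappa, q, t_max=300.0, dt=2e-3):
    """Leapfrog integration of X'' + (kappa^2 + q*sin(t)^2) X = 0, an
    illustrative oscillator with a periodically modulated frequency;
    returns the maximum |X| reached, with X(0) = 1, X'(0) = 0."""
    x, v, t = 1.0, 0.0, 0.0
    amp = 1.0
    for _ in range(int(t_max / dt)):
        v += 0.5 * dt * (-(kappa**2 + q * math.sin(t) ** 2) * x)
        x += dt * v
        t += dt
        v += 0.5 * dt * (-(kappa**2 + q * math.sin(t) ** 2) * x)
        amp = max(amp, abs(x))
    return amp

# kappa^2 + q/2 = 1 places the mode at the centre of the principal
# parametric-resonance band of this toy equation
print(evolve_mode(math.sqrt(0.75), 0.5))   # grows by many orders of magnitude
print(evolve_mode(3.0, 0.5))               # off resonance: stays O(1)
```

Scanning `kappa` at fixed `q` in this toy model maps out resonance shells in momentum, the qualitative feature the semi-analytic relation above is designed to predict without integration.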

        Speaker: Fazlul Yasin (Carleton University Ottawa)
    • 19:45
      CJP Editorial Board Dinner | Souper du comité de rédaction de la RCP (location will be told to Ed.Bd. by CJP)


    • 19:45
      Departmental Leaders Meeting (by invitation only) / Réunion des directeurs(directrices) de département (sur invitation seulement)

      Chaired by Chitra Rangan (University of Windsor), CAP Director of Academics Affairs

    • 19:45
      Open Forum with Student Advisory Council followed by possible student networking event at another venue | Forum ouvert avec le conseil consultatif des étudiants, suivi d'un éventuel événement de mise en réseau des étudiants dans un autre lieu.
    • 07:30
      Congress Registration and Information (07h30-17h00) | Inscription au Congrès et information (07h30-17h00) Ramp area (Arts Bldg., U.Sask.)


    • 08:45
      Plenary hall opens | Ouverture de la salle plénière
    • W-PLEN1 NSERC Community Update Plenary Session | Session plénière - Mise à jour de la communauté par le CRSNG
    • 09:45
      Health Break with Exhibitors | Pause santé avec les exposants
    • W1-1 Future Particle Physics Energy Frontier Facilities | Installations futures à la pointe de la physique des particules

      With the discovery of the Higgs boson, the Standard Model is now complete. Yet key questions remain unanswered: the matter–antimatter asymmetry, the nature of dark matter and the origin of neutrino masses. Addressing these requires physics beyond the Standard Model and motivates intensified collider exploration. This symposium will bring together researchers working on new facilities that will enable exploration of the energy frontier. Discussion topics will include theory motivations, experimental developments, and accelerator technologies that enable the next generation of experiments.

      A portion of the symposium will be dedicated to convening Canadian researchers interested in learning about or contributing to the Future Circular Collider (FCC). This new accelerator is designed to extend our reach with unprecedented sensitivity, precision, and energy beyond the TeV scale. If pursued by CERN and Europe, 2028–2034 will be a critical period for building collaborations and developing the required detector technologies. The goal is to engage researchers in material science, computational physics, instrumentation, accelerator technology and high energy theory and experiment in shaping a new effort that could be a major part of Canada’s subatomic physics community.

      Avec la découverte du boson de Higgs, le Modèle standard est désormais complet. Cependant, des questions fondamentales restent sans réponse : l'asymétrie matière-antimatière, la nature de la matière noire et l'origine des masses des neutrinos. Pour y répondre, il faut aller au-delà du Modèle standard et intensifier l'exploration des collisionneurs. Ce symposium réunira des chercheurs travaillant sur de nouvelles installations qui permettront d'explorer les frontières de l'énergie. Les thèmes abordés comprendront les motivations théoriques, les développements expérimentaux et les technologies d'accélérateurs qui permettront la prochaine génération d'expériences.

      Une partie du symposium sera consacrée à réunir les chercheurs canadiens intéressés à en savoir plus sur le Future Circular Collider (FCC) ou à y contribuer. Ce nouvel accélérateur est conçu pour étendre notre portée avec une sensibilité, une précision et une énergie sans précédent au-delà de l'échelle TeV. Si le CERN et l'Europe poursuivent ce projet, la période 2028-2034 sera cruciale pour établir des collaborations et développer les technologies de détection nécessaires. L'objectif est d'impliquer des chercheurs en science des matériaux, en physique computationnelle, en instrumentation, en technologie des accélérateurs et en théorie et expérimentation des hautes énergies dans la mise en place d'un nouvel effort qui pourrait constituer un élément majeur de la communauté canadienne de physique subatomique.

    • (DAMOPC) W1-10 | (DPAMPC)
      • 347
        Optimized Laser Driving of Bright, Multiplexed Solid-State Quantum Emitters

        Sources of single and entangled photons are needed for many areas of quantum photonics, including quantum computation, cryptography, and sensing. Solid-state emitters that may be triggered on demand are especially promising for long-term scalability and integration with classical communication hardware. Over the past decade, our research group has been developing laser triggering schemes for quantum emitters that utilize pulse shaping to engineer the light-matter interaction [1-5]. By exploiting amplitude and phase control, we have implemented triggering protocols that are robust to variations in the laser pulse parameters and the optical properties of the emitters themselves [1,3], facilitating commercial implementation using solid-state systems. We have also shown that shaping eases the technical complexity associated with multiplexing in quantum optical systems [5,6] and enables several performance metrics of quantum emitters (brightness, indistinguishability, purity) to be optimized simultaneously [4]. In this presentation, I will highlight our recent experiments pursuing the implementation of our triggering scheme Notch-filtered Adiabatic Rapid Passage (NARP) [4] in single semiconductor quantum dots, including InGaAs quantum dots in planar heterostructures and InAsP quantum dots in nanowire waveguides.
        [1] R. Mathew et al., “Subpicosecond adiabatic rapid passage on a single semiconductor quantum dot: phonon-mediated dephasing in the strong-driving regime”, Phys. Rev. B 90, 035316 (2014).
        [2] A. Ramachandran et al., “Suppression of decoherence tied to electron–phonon coupling in telecom-compatible quantum dots: low-threshold reappearance regime for quantum state inversion”, Opt. Lett. 45, 6498 (2020).
        [3] A. Ramachandran et al., “Experimental quantification of the robustness of adiabatic rapid passage for quantum state inversion in semiconductor quantum dots”, Opt. Express 29, 41766 (2021).
        [4] G. R. Wilbur et al., “Notch-filtered Adiabatic Rapid Passage for optically driven quantum light sources”, APL Photonics 7, 111302 (2022).
        [5] A. Ramachandran et al., “Robust parallel driving of quantum dots for multiplexing of quantum light sources”, Sci. Rep. 14, 5356 (2024).
        [6] A. Binai Motlagh et al., “Multi-NARP Laser Driving Scheme for Multiplexed Quantum Networks”, Optica Quantum 3, 461 (2025).

        Speaker: Prof. Kimberley Hall (Dalhousie University)
      • 348
        Many-body quantum physics with an optical centrifuge: spinning particles and quasiparticles in superfluid helium

        We use ultrashort laser pulses—including a unique tool known as an optical centrifuge—to coherently control and probe ultrafast, nonequilibrium many-body dynamics in superfluid helium. In this talk, I will present two complementary approaches, recently developed in my group, that exploit strong-field and ultrafast laser techniques to gain new microscopic insight into the remarkable quantum phenomenon of superfluidity.

        In the first approach, we use the optical centrifuge to spin up molecules dissolved in liquid helium and investigate the decay of their rotation due to interactions between the molecular rotor and the surrounding quantum bath [1]. The ability to precisely control the rotational frequency of the molecule using tailored laser fields provides a powerful handle for studying how angular momentum and energy are exchanged between a single quantum particle and a many-body environment.

        In the second approach, we exploit the high peak intensity of femtosecond laser pulses to coherently launch collective excitations known as rotons. By tracking the nonequilibrium many-body dynamics of these mysterious quasiparticles on a picosecond timescale, we probe the ultrafast response of the superfluid and explore the microscopic origin of its collective behavior from a complementary, previously inaccessible, viewpoint [2].

        Taken together, the study of these two objects—laser-driven single molecular rotors and optically generated collective excitations involving many helium atoms—brings us closer to a better microscopic understanding of superfluidity.

        1. MacPhail-Bartley I., Milner A. A., Stienkemeier F., Milner V., “Control of molecular rotation in helium nanodroplets with an optical centrifuge”, Phys. Rev. Lett., in press (2026).
        2. Milner A. A., Stamp P. C. E., Milner V., “Ultrafast non-equilibrium dynamics of rotons in superfluid helium,” PNAS, 120, e2303231120 (2023).
        Speaker: Valery Milner (UBC)
      • 349
        Simulating topological states of matter in atoms

        Over the past 30 years, advancements in laser and cooling technology have led to unprecedented control over atomic, molecular, and optical (AMO) systems at the level of single quanta. These innovations have transformed AMO systems into powerful and versatile tools for a wide range of applications, particularly in the field of quantum simulation. In this talk, I will focus on the simulation of topological states of matter. I will draw distinctions between the currently popular approach of using neutral atoms in optical lattices and my own research, which involves replacing spatial degrees of freedom in these simulations with internal atomic states such as spin. New phenomena, such as movable topologically protected states, will be highlighted, and I will conclude by discussing the possibility of such states existing naturally within atoms without the need for engineering.

        Speaker: Jesse Mumford (University of Victoria)
    • (DTP) W1-11 | (DPT)
      • 350
        xCPS: A Computational Toolkit for Covariant Phase Space Methods

        The covariant phase space (CPS) formalism provides a geometric framework to derive equations of motion and endow the space of solutions with a symplectic structure. This variational approach additionally provides Noether charges and plays a central role in modern gravitational physics and gauge theories. Despite its conceptual elegance, explicit computations in physically relevant models (particularly in gravitational theories with higher-derivative or non-minimal couplings) can become technically demanding and error-prone.

        In this talk, after a brief introduction to the CPS formalism, I will present xCPS, a Mathematica package designed to automate the covariant phase space formalism for general Lagrangian field theories. The package computes equations of motion, presymplectic potentials, symplectic currents, Noether currents and charges, and related geometric structures in a fully covariant manner. I will conclude with some interesting examples.

        Speaker: Juan Margalef (Université de Montréal)
      • 351
        Bosonization of Noise Effects on Nonlocal Open Quantum Dynamics

        Quantum systems that interact non-locally with an environment are paradigms for exploring collective phenomena. They naturally emerge in various physical contexts involving long-range, many-body interactions. We consider a general class of such open systems characterized by a coupling to the environment which is inversely proportional to the square root of the environment size. We demonstrate that the effect on the system dynamics has a universal bosonic character. Specifically, the same system dynamics is produced by the interaction with an environment of non-interacting bosonic modes, regardless of the microscopic details of the original environment. This emergent ‘bosonization’ of the environment’s influence results from the scaling of the coupling in the thermodynamic limit and is a manifestation of the quantum central limit theorem. While the effect has been observed in specific models before, we show that it is, in fact, a universal feature.

        Speaker: Marco Merkli
      • 352
        Quantum-like features in classical mechanics

        Hilbert space, superposition, uncertainty relations, and Wigner negativity are terms usually associated with quantum phenomena and formalism. We show that all of these have analogues in classical mechanics, and attempt to explain their (in hindsight, unsurprising) meaning in a classical context. This is in an effort to characterize and identify the root of the differences between classical and quantum mechanics, and investigate the possibility of combining both theories into a single framework.

        Speaker: Mustafa Amin (University of Lethbridge)
      • 353
        Bridging the Gap between Classical and Quantum Mechanics through General Relativity.

        This paper proposes a conceptual framework to bridge the gap between classical and quantum mechanics using an emergent paradigm that positions general relativity in a probabilistic context. Starting from an analogy between the Einstein equations and Bayes' law, the linear case of a weak-field, static, symmetric massive object is analyzed to point out how Einstein’s equation could incorporate a quantum mechanical concept, a weighting factor that considers the probability of the presence of a given energy-momentum density in a 4D space-time manifold. Using the Central Limit Theorem to model globally the very slow process of star formation and mathematically express the corresponding density function, the new framework provides a rationale for the emergence of a modified Newton’s law of gravitation, the classical inverse-square function weighted by an exponential probabilistic factor. One key feature of this model is that it relies on the existence of an intrinsic physical constant, a star-specific proper length that scales all its surroundings and plays the role of a hidden variable to link classical and quantum mechanics. Incorporating the corresponding emergent erfc potential into a Schwarzschild spacetime metric, the constant component of the erfc potential makes the universal coordinate time smaller than the proper time of an observer at rest. On the one hand, the real roots of the binomial condition that makes the erfc metric identical to a Minkowskian one predict the splitting of the speed of light into two components: the speed with respect to a fixed space-time and an apparent spacetime expansion. On the other hand, the imaginary roots predict the emergence of a complex spacetime where quantum phenomena arise, governed by the Schrödinger equation, providing, among other things, new interpretations of wave-particle duality and photon entanglement.

        Speaker: Prof. Réjean Plamondon
      • 354
        An Isomorphism between Quantum Nonlocality and the Relativity of Colocality

        Understanding Bell nonlocality (EPR) in the context of relativistic spacetime is a longstanding problem in the foundations of physics. Here, we show that an observer-centric reinterpretation of the ontology of classical special relativistic spacetime (for an ensemble of observers) exhibits sufficiently close parallels to the phenomena of quantum non-locality that we can claim that the two sets of properties are isomorphic.

        While the physical contexts are different, the pattern of properties is identical. This isomorphism leads us to believe that observer disagreement within standard special relativity contains the seeds of quantum nonlocality.

        Colocality: This claim relies on the relativity of colocality [1], which simply describes how relatively moving observers disagree on which events occur at the same location. The relativity of colocality is the dual effect of the relativity of simultaneity and a direct consequence of the symmetry of the Lorentz boost. Disagreements on colocality can be interpreted as a many-space ontology, as discussed extensively elsewhere [2].

        Classical to Quantum: The essential difference between the classical and quantum cases is that while classical inertial observers can agree-to-disagree, this option is not available to a quantum system in a momentum superposition, because the disagreements become internal to the system. It no longer becomes possible to dismiss observer disagreement as mere coordinate choice; instead, actual spatial indeterminacy arises [3].

        Conclusions: Reinterpretation of the ontology of special relativity to respect observer disagreements leads to numerous quantum-like spacetime properties. Nonlocality is seen as a consequence of a many-spaces ontology.

        [1] J. C. Sharp, “Symmetry of the Lorentz boost: the relativity of colocality and Lorentz time contraction”, Eur. J. Phys. 37, 055606 (2016).
        [2] J. C. Sharp, A Universe of Spaces, Amazon (2024).
        [3] J. C. Sharp, “The Relativistic Origin of Quantum Indeterminacy”, CAP Congress 2025.

        Speaker: Prof. Jonathan Sharp (University of Alberta)
      • 355
        Time-Symmetric Nonlinear Dynamics and the Emergence of Objective Wavefunction Collapse in Strongly Correlated t-U-V-J Manifolds

        In this study, we propose a unified theoretical framework for objective wavefunction collapse by situating non-linear quantum dynamics within a time-symmetric manifold. Utilizing the Highly Simplified Correlated Variational Approach (HSCVA), we map the t-U-V-J Hamiltonian of strongly correlated electronic systems onto a non-linear temporal landscape. We introduce the Extended Dirac Principle of Temporal Superposition, wherein the non-linearity of the potential induces a bifurcation of time-evolution trajectories into discrete "time-branches." To quantify the transition from quantum unitary evolution to classical state reduction, we derive a Unified Collapse Metric (UCM) which synthesizes soliton persistence, fidelity saturation, and Aharonov-Bergmann-Lebowitz (ABL) symmetry variance. Our simulation of $YBa_2Cu_3O_{7-\delta}$ (YBCO) demonstrates that objective collapse is an emergent property of the system's own electronic interactions, specifically the U and J terms which act as an intrinsic observation mechanism. By integrating forward- and backward-evolving states through the ABL formalism, we show that the collapse event corresponds to the point where time-symmetry breaks, providing a dynamical origin for the macroscopic arrow of time.

        Speaker: Prof. Godfrey Ejiroghene Akpojotor (Delta State University, Abraka, Nigeria)
    • (DPP) W1-12 Fundamental plasmas theory & experiment | Théorie et expérimentation des plasmas fondamentaux (DPP)
      • 356
        From Micro-Scale Instabilities to Anomalous Macroscopic Transport: A High-Fidelity Kinetic Numerical Approach

        Electron and ion transport in non-equilibrium plasmas is inherently complex, exhibiting intricate behavior across both spatial and temporal scales. The Particle-In-Cell (PIC) method is one of the most widely used approaches in plasma physics, as it is the closest to first principles and requires very few input parameters, enabling simulations to self-consistently reach a steady state that closely reflects physical reality. It is well established that satisfying the PIC stability criteria ($\Delta x < \lambda_D$; $\Delta t < \min(0.2\,\omega_{pe}^{-1}, \Delta x / v_{max})$; sufficiently large $N_{PPC}$) is mandatory to obtain stable and physically meaningful results, where $\Delta x$, $\lambda_D$, $\Delta t$, $\omega_{pe}$, $v_{max}$, and $N_{PPC}$ denote the mesh size, Debye length, time step, plasma frequency, velocity of the fastest macro-particle, and number of macro-particles per cell, respectively.
        However, when plasma transport is governed by turbulence or strong instabilities, merely satisfying these criteria at their margins proves insufficient. In this work, we demonstrate that sub-Debye-length refinement of the spatial grid combined with high-order shape functions is essential to capture important physical effects and transport mechanisms that are systematically overlooked when first-order shape functions with marginally fulfilled PIC criteria are used instead. This finding represents a paradigm shift in quantitative PIC modelling. To support this claim, we compare two typical cases: (i) electropositive and (ii) electronegative plasmas. For both situations, 1D and 2D simulations have been carried out. Remarkable differences occur in the charged particle transport and especially the micro-instabilities. In light of these results, the numerical approach used to simulate plasmas is crucial to capture the anomalous transport.
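The quoted stability criteria can be evaluated numerically for a given setup using the standard expressions for the Debye length and plasma frequency; the discharge parameters below are generic illustrative values, not those of the simulations in this work.

```python
import numpy as np

# SI constants: vacuum permittivity, elementary charge, electron mass, k_B
EPS0, QE, ME, KB = 8.854e-12, 1.602e-19, 9.109e-31, 1.381e-23

def pic_constraints(n_e, T_e_eV, dx, dt, v_max):
    """Return (Debye length, plasma frequency, criteria satisfied?) for the
    PIC stability conditions dx < lambda_D and dt < min(0.2/w_pe, dx/v_max)."""
    T_e = T_e_eV * QE / KB                               # eV -> K
    lam_D = np.sqrt(EPS0 * KB * T_e / (n_e * QE**2))     # Debye length
    w_pe = np.sqrt(n_e * QE**2 / (EPS0 * ME))            # plasma frequency
    dt_max = min(0.2 / w_pe, dx / v_max)
    return lam_D, w_pe, (dx < lam_D) and (dt < dt_max)

# Typical low-temperature discharge: n_e = 1e16 m^-3, T_e = 3 eV
lam_D, w_pe, ok = pic_constraints(1e16, 3.0, dx=5e-5, dt=1e-11, v_max=3e6)
print(f"lambda_D = {lam_D:.2e} m, omega_pe = {w_pe:.2e} rad/s, stable: {ok}")
```

The point of the abstract is that passing this check only marginally is not enough in turbulence-dominated regimes; sub-Debye grids and high-order shape functions are needed beyond it.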

        Speaker: Prof. Loïc SCHIESKO (CEA)
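        As an illustration of the stability criteria quoted in the abstract above (a minimal sketch, not the speaker's code), the following checks whether a hypothetical explicit-PIC setup satisfies Δx < λD, Δt < min(0.2 ωpe⁻¹, Δx/vmax), and a sufficient macro-particle count. All numerical values and the NPPC threshold are assumptions for illustration only.

```python
def pic_stable(dx, dt, debye_length, plasma_freq, v_max, n_ppc, n_ppc_min=100):
    """Return each explicit-PIC stability check; n_ppc_min is an assumed
    threshold for 'sufficiently large' NPPC."""
    return {
        "spatial": dx < debye_length,                         # Δx < λD
        "temporal": dt < min(0.2 / plasma_freq, dx / v_max),  # Δt < min(0.2/ωpe, Δx/vmax)
        "sampling": n_ppc >= n_ppc_min,                       # enough macro-particles/cell
    }

# Illustrative low-temperature discharge numbers (assumed, not from the talk):
checks = pic_stable(dx=1e-5, dt=1e-12, debye_length=5e-5,
                    plasma_freq=5.6e10, v_max=2e6, n_ppc=200)
```

        Note that the abstract's point is precisely that passing these checks at their margins is not enough when turbulence dominates transport; the sketch only encodes the textbook criteria themselves.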
      • 357
        Impact of electromagnetic turbulence and dissipation in magnetic reconnection

        Magnetic reconnection is a fundamental process that leads to rapid energy conversion in astrophysical, space, and laboratory plasma systems. In modeling this process, most plasma fluid equations have employed an electrical resistivity to generate the magnetic dissipation required for magnetic reconnection to occur in a collisionless plasma. However, there has been no clear evidence that such a model is appropriate in the reconnection diffusion region in terms of the kinetic physics. Using a large-scale 3D kinetic simulation and analytical analysis, the present study demonstrates that the spatial distribution of the non-ideal electric field is consistent with dissipation due to viscosity rather than resistivity when electromagnetic (EM) turbulence is dominant in the electron diffusion region (EDR). The effective viscosity is caused by the EM turbulence, which is driven by flow shear instabilities leading to electron momentum transport across the EDR [1]. The result suggests a fundamental modification of the fluid equations that use resistivity in Ohm’s law. In contrast, for the 2D current sheet without significant turbulence activity, the non-ideal field profile does not obey the simple form based on viscosity. A general form of the non-ideal electric field in 2D and 3D current sheets appropriate for fluid simulations is presented. The second part of the talk will be concerned with the time domain structures (TDS) associated with free energy sources (beams, thermal anisotropy, etc.) that arise from the energy conversion process in magnetic reconnection and magnetic flux rope interaction. The results of both the anomalous transport and TDS studies are connected to recent satellite observations (Magnetospheric Multiscale Satellites - MMS) [2] and laboratory magnetized plasma experiments.
        [1] K. Fujimoto and R.D. Sydora, “The electron diffusion region dominated by electromagnetic turbulence in the reconnection current layer”, Phys. Plasmas, 30 (2), (2023), 022106 (10 pages).
        [2] Z.H. Zhong, et al., “Electromagnetic viscosity supported anomalous electric field in the electron diffusion region of collisionless magnetic reconnection”, Nature Comm., 16, (2025), 10519 (11 pages).

        Speaker: Richard Sydora (University of Alberta)
      • 358
        Experimental and numerical investigation of nanosecond discharges in air gap with hemispherical dielectric layer

        Electrical discharges are transient phenomena highly sensitive to the geometric and dielectric properties of their surrounding environment. Such a dependence directly influences plasma-surface interactions and the efficiency of any related application. This study investigates, experimentally and by simulation, the propagation dynamics of a nanosecond discharge in an air gap containing a hemispherical dielectric layer (HDL). Experimentally, the HDL was fabricated by 3D printing of an opaque resin with a dielectric permittivity εr = 3. Its diameter and thickness can be finely adjusted during printing. Herein, we investigate the influence of the HDL’s thickness (100-1000 µm) on the dynamics of the discharge generated inside and outside the HDL using an ICCD camera. To further determine the spatial and temporal properties of the discharge, numerical simulations are performed by adapting a 2D fluid model [1]. The simulation predicts the spatial evolution of the electron density and electric field inside and outside the HDL. After being validated against the experimental cases, the model is employed to investigate the role of εr (3, 40, 80) on the discharge properties. The results offer new insights into plasma-surface interactions, which are highly needed for many applications.

        Speaker: Mahdi Miloud Mohamed Boussa (Université de Montréal)
      • 359
        Effect of the chain of multiple water droplets on the propagation of a nanosecond pulsed plasma discharge at atmospheric pressure

        Plasma–liquid interactions have attracted growing interest in both academia and industry [1] due to their relevance to chemical synthesis [2], microbial inactivation [3], and environmental remediation [4,5]. Discharges at atmospheric pressure are highly sensitive to nearby liquid structures, which can reshape their formation and propagation. In this work, we investigate how a chain of millimetre-scale water droplets influences the initiation, structure, and propagation behaviour of a nanosecond pulsed discharge in air.
        A pin-to-pin electrode geometry is used above a glass substrate. A chain of water droplets with controlled spacing and volume is placed on the substrate between the electrodes. The plasma is driven by 50–400 ns voltage pulses at peak amplitudes of 14–20 kV. Time-resolved current and voltage waveforms are recorded, and ICCD imaging combined with optical emission spectroscopy provides nanosecond-scale insight into streamer development and plasma–liquid interaction regions.
        The introduction of a single droplet already modifies the discharge path compared with propagation in dry air, consistent with previous observations [6]. Increasing the number of droplets further alters breakdown timing, peak current, and filament topology. Enhanced local electric fields near droplets promote secondary streamer inception and lead to stronger OH and N₂ emission in the droplet vicinity, indicating intensified energy deposition and plasma–liquid coupling. This study examines whether these effects persist or evolve when multiple droplets are arranged in a chain.

        Speaker: Ms Valentina Riazantceva (Institut national de la recherche scientifique)
      • 360
        DRAGONS: New Insights into Galactic Magnetism

        Magnetic fields are an essential component for life. Without magnetic fields, the Earth would be irradiated by the Solar wind, and the Galaxy would collapse under gravitational pressure. While we have known for ~2000 years that the Earth has a magnetic field, the idea of a Galactic magnetic field (GMF) was first proposed less than 90 years ago. Our work has been focused on understanding what the GMF looks like today, which will provide critical constraints in understanding how the GMF originally formed and how it is evolving.

        In the late 1970s and early 1980s, a magnetic shear (reversal) was identified between the local spiral arm and the next inner arm of our Galaxy (Sagittarius arm). Identifying additional reversals became the objective for many GMF studies since the number and location of these reversals could help determine the likely galactic dynamo mode(s) at work in the Milky Way. The new DRAGONS radio polarization data, observed at the Dominion Radio Astrophysical Observatory in Penticton, British Columbia, are providing new clues about the GMF within 5000 lightyears of the Sun. I will share these latest results, which suggest a remarkable similarity to the Solar magnetic field.

        Speaker: Jo-Anne Brown
    • W1-2 Big Data in Matter, Materials, and Beyond | Le Big Data dans la matière, les matériaux et au-delà

      Physics has been an empirical science for centuries, but contemporary data exist in quantities that dwarf Newton’s Principia. Today, marshalling data to distill the understanding to describe emergent behaviours, invent new materials, and develop new solutions for improving technologies and human health requires fundamentally different approaches. This symposium will draw together physicists working across disciplinary borders to forge new approaches to empirical science in the 21st century.

      The planned speakers bring expertise in machine learning and artificial intelligence, information geometry, big data, and applications in materials, condensed matter, biophysics, and the physics of medicine. In addition, the speakers include Canadians currently abroad, foreign physicists near(ish) to Ottawa or with research connections to Canada, and Canadian physicists at diverse career stages. We expect contributed talks and participation from students in condensed matter and beyond with academic and industry interests in machine learning, data science, and applied physics.

      La physique est une science empirique depuis des siècles, mais les données contemporaines existent en quantités qui éclipsent les Principia de Newton. Aujourd'hui, rassembler des données pour distiller la compréhension afin de décrire les comportements émergents, inventer de nouveaux matériaux et développer de nouvelles solutions pour améliorer les technologies et la santé humaine nécessite des approches fondamentalement différentes. Ce symposium réunira des physiciens travaillant au-delà des frontières disciplinaires afin de forger de nouvelles approches de la science empirique au XXIe siècle.

      Parmi les intervenants prévus figurent des experts en apprentissage automatique et en intelligence artificielle, en géométrie de l'information, en mégadonnées et en applications dans les domaines des matériaux, de la matière condensée, de la biophysique et de la physique médicale. En outre, les intervenants comprennent des Canadiens actuellement à l'étranger, des physiciens étrangers proches (ou presque) d'Ottawa ou ayant des liens de recherche avec le Canada, ainsi que des physiciens canadiens à différentes étapes de leur carrière. Nous attendons des contributions et la participation d'étudiants en matière condensée et au-delà, qui s'intéressent à l'apprentissage automatique, à la science des données et à la physique appliquée dans le milieu universitaire et industriel.

    • (DCMMP) W1-3 | (DPMCM)

      The field of quantum science and technology, focusing specifically on quantum simulation, deals with the development of novel quantum systems and hardware to realize new approaches to understanding and controlling complex quantum many-body systems on different time and energy scales.

      The Programmable Quantum Simulators Based on 2-Dimensional (2D) Materials initiative, supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), is a national-scale research effort uniting three Canadian quantum hubs. The collaboration spans eight research groups across six universities and includes three industry partners. The project is structured around three core goals: design and construct a specialized quantum simulator capable of emulating the complex behavior of quantum systems; implement programmable quantum devices leveraging 2D material-based hardware platforms; and build large-area high-quality 2D materials and theoretical models to enhance the development of quantum devices and their fabrication.

      This initiative targets fundamental challenges in quantum phases of matter, including correlated insulators, superconductors, Wigner crystals, topological phases, spin liquids and magnetism. Beyond fundamental science, it holds promise for technological breakthroughs, such as unraveling mechanisms behind high-temperature superconductivity or optimizing the performance of quantum materials (semiconductors, magnets, ferroelectrics, and topological materials) for applications in electronics and optoelectronics. The symposium will bring together Canadian physicists to present the initiative’s recent advances and ongoing challenges, highlight potential industrial and academic impacts, and foster new collaborations within the Canadian quantum research ecosystem.

      Le domaine de la science et de la technologie quantiques, qui se concentre spécifiquement sur la simulation quantique, traite du développement de nouveaux systèmes et matériels quantiques afin de mettre en œuvre de nouvelles approches pour comprendre et contrôler des systèmes quantiques complexes à plusieurs corps sur différentes échelles de temps et d'énergie.

      L'initiative « Simulateurs quantiques programmables basés sur des matériaux bidimensionnels (2D) », soutenue par le Conseil de recherches en sciences naturelles et en génie du Canada (CRSNG), est un effort de recherche à l'échelle nationale qui réunit trois pôles quantiques canadiens. La collaboration s'étend à huit groupes de recherche dans six universités et comprend trois partenaires industriels. Le projet s'articule autour de trois objectifs principaux : concevoir et construire un simulateur quantique spécialisé capable d'émuler le comportement complexe des systèmes quantiques ; mettre en œuvre des dispositifs quantiques programmables utilisant des plateformes matérielles basées sur des matériaux 2D ; et construire des matériaux 2D de grande surface et de haute qualité ainsi que des modèles théoriques afin d'améliorer le développement et la fabrication des dispositifs quantiques.

      Cette initiative cible les défis fondamentaux liés aux phases quantiques de la matière, notamment les isolants corrélés, les supraconducteurs, les cristaux de Wigner, les phases topologiques, les liquides de spin et le magnétisme. Au-delà de la science fondamentale, elle est prometteuse pour des percées technologiques, telles que la découverte des mécanismes à l'origine de la supraconductivité à haute température ou l'optimisation des performances des matériaux quantiques (semi-conducteurs, aimants, ferroélectriques et matériaux topologiques) pour des applications en électronique et en optoélectronique. Le symposium réunira des physiciens canadiens qui présenteront les progrès récents et les défis actuels de l'initiative, mettront en évidence les impacts potentiels sur l'industrie et le monde universitaire et favoriseront de nouvelles collaborations au sein de l'écosystème canadien de recherche quantique.

    • W1-4 Novel Imaging of the Retina of the Eye | Nouvelle imagerie de la rétine de l'œil

      This special Symposium recognizes the impactful research and training undertaken by Prof. Kostadinka Bizheva of the Department of Physics and Astronomy, University of Waterloo. Prof. Bizheva recently passed away, at the height of her career. Her research has had, and will continue to have, a substantial impact on the development of optical coherence tomography for high-resolution imaging of the retina and the diagnosis and tracking of eye diseases. In addition, Prof. Bizheva had a tremendous impact on her many graduate students, to whom she provided unwavering support. Invited speakers will present the novel methods and the resulting impacts on the diagnosis and treatment of eye and other diseases. The Symposium will allow researchers, postdoctoral fellows, and graduate students to gather in Prof. Bizheva’s memory while celebrating the use of novel physics techniques for diagnostic imaging, including those advanced by her research.

      Ce symposium spécial vise à reconnaître l'importance des travaux de recherche et de formation menés par la professeure Kostadinka Bizheva, du département de physique et d'astronomie de l'Université de Waterloo. La professeure Bizheva est récemment décédée, au sommet de sa carrière. Ses travaux de recherche ont eu et continueront d'avoir un impact considérable sur le développement de la tomographie par cohérence optique pour l'imagerie haute résolution de la rétine et le diagnostic et le suivi des maladies oculaires. De plus, la professeure Bizheva a eu une influence considérable sur ses nombreux étudiants diplômés, auxquels elle a apporté un soutien indéfectible. Les conférenciers invités présenteront les nouvelles méthodes et leurs répercussions sur le diagnostic et le traitement des maladies oculaires et autres. Le symposium permettra aux chercheurs, aux stagiaires postdoctoraux et aux étudiants diplômés de se réunir en mémoire de la professeure Bizheva tout en célébrant l'utilisation de nouvelles techniques physiques pour l'imagerie diagnostique, notamment celles mises au point grâce à ses recherches.

      • 361
        Advancing Retinal Imaging with Optical Coherence Tomography: From Morphology to Neurovascular Coupling

        The retina is a highly metabolically active neural tissue in which the neuronal and vascular components work together to meet its high metabolic demands through a phenomenon known as neurovascular coupling. Neurodegenerative diseases such as glaucoma, age-related macular degeneration, and retinitis pigmentosa have been linked with disruption of this phenomenon. Understanding these alterations at a cellular level requires imaging tools capable of resolving both structure and function in vivo.

        Over the past few decades, Optical Coherence Tomography (OCT) has transformed retinal imaging by enabling non-invasive, depth-resolved imaging of the retina. Early applications focused on structural characterization of retinal layers and quantitative assessment of pathological changes in retinal neurodegenerative diseases. These advances established OCT as an indispensable clinical and research tool.

        Building on this structural foundation, more recent developments have extended OCT beyond morphology toward functional imaging. Doppler OCT enabled quantitative assessment of retinal blood flow and provided new insights into vascular regulation. More recently, optoretinography (ORG) has emerged as a powerful approach for detecting stimulus-evoked intrinsic optical signals associated with neuronal activation, allowing direct, non-invasive measurement of retinal function in vivo.

        This talk will trace the progression of retinal imaging from structural characterization to the investigation of retinal dynamics, highlighting advances in OCT-based techniques that bridge morphology, vascular physiology, and neuronal activity. Particular emphasis will be placed on recent work examining stimulus-evoked retinal blood flow changes and their relationship to neurovascular coupling, demonstrating how modern OCT approaches can probe functional hyperemia in the living human retina.

        Together, these advances illustrate the continuing evolution of OCT from structural imaging to a powerful platform for probing retinal physiology in vivo.

        Speaker: Khushmeet Kaur Dhaliwal (University of Waterloo)
      • 362
        In vivo cellular-scale imaging of fluorophores in the living retina

        Many cells in the retina can contain metabolic and visual cycle molecules with intrinsic fluorescent properties. As a result, adaptive optics single- and two‑photon excited fluorescence ophthalmoscopy offer powerful approaches for examining in vivo the health and function of individual retinal cells. By analyzing both the fluctuations in fluorescence intensity and their timing characteristics, these modalities can provide insight into cellular physiology. Differences observed in time‑resolved fluorescence images reflect the diverse fluorescent sources present across various retinal cell types.

        In non‑human primates, two‑photon excited fluorescence signals change in response to visual stimulation and show distinct alterations in models of retinal degeneration and during systemic hypoxia. In human participants, single-photon fluorescence enables visualization of cellular mosaics and detection of molecular variations. Furthermore, in preclinical studies, the introduction of extrinsic fluorophores into cells allows these molecules to serve as optical reporters of retinal function.

        Speaker: Jennifer Hunter (University of Waterloo)
      • 363
        Oz Vision: An Advanced Display Platform for Basic and Clinical Vision Science

        Oz Vision is a new principle for visual display in which individual photoreceptors are directly stimulated to generate visual experience. An Oz Vision display bypasses constraints such as image formation by the eye’s optics and even the spectral sensitivities of the photoreceptors. To implement this principle, we use an adaptive optics scanning light ophthalmoscope (AOSLO), which corrects optical aberrations and enables precise delivery of light to targeted cone photoreceptors. The scanning architecture allows high-speed retinal imaging and tracking, as well as the delivery of carefully controlled microflashes to each cone across the region of the retina encompassed by the display. The display is small -- about twice the width of the full moon -- but can contain over 5,000 cones. This system enables new ways to study visual perception in both healthy and diseased eyes. We demonstrate two applications of Oz Vision: first, the generation of color sensations that lie outside the normal human color gamut; and second, the ability to test the perceptual consequences of cone loss—such as in retinal degenerative disease—by emulating specific patterns of photoreceptor loss in otherwise healthy observers.

        Speaker: Prof. Austin Roorda (University of Waterloo)
      • 364
        In vivo imaging of retinal protein deposits in human retina as biomarkers of Alzheimer’s Disease and other neurodegenerative diseases

        Introduction: We have previously demonstrated ex vivo, dye-free imaging of retinal protein deposits in the human retina which predict the presence of deposits in the brain of: 1) amyloid beta in association with Alzheimer’s disease (AD), 2) alpha synuclein in association with Dementia with Lewy Bodies (DLB) and Multiple System Atrophy (MSA), and 3) TDP-43 in association with ALS and FTLD-TDP. Postmortem, the number of deposits found in retinas with an associated brain pathology of AD predicted the severity of the disease. In AD, retinal and brain deposits are found years before diagnosis. Here, we use polarized light to image protein deposits in the retina of the living eye.

        Methods: A custom module was added to the front of a commercial retinal imaging system, which produced circularly polarized light incident on the eye. Light reflected from the retina exited the eye, was incident on the quarter wave plate followed by a linear polarizer and was recorded at 30 frames/sec. The individuals imaged included 1 with a diagnosis of AD, 1 over 65 with a family history of AD, 1 diagnosed with age related macular degeneration (AMD) and 1 who had no history of AD or AMD.

        Results: In the individuals with either a family history of the disease or a diagnosis of AD, deposits, consistent with Alzheimer’s disease (and similar to those imaged ex vivo), were imaged in the anterior (neural) layers of the retina. One had numerous deposits and one had sparse deposits consistent with early Alzheimer’s disease pathology. Three individuals had deposits located in the more posterior layers of the retina, consistent with AMD.

        Conclusion: This imaging method would be the only dye-free, non-invasive, inexpensive, and widely available diagnostic of pathology due to Alzheimer’s and other neurodegenerative diseases. It could facilitate early interventions known to slow disease progression.

        Speaker: Lyndsy Acheson (University of Waterloo)
    • W1-5 Private Sector Physics | Physique du secteur privé

      Over 75% of physics graduates work in the private sector. Young physicists, and those interested in learning about physics career paths outside academia, are encouraged to attend this interactive symposium, which will provide insights into the careers of private sector physicists and offer advice on the possible pathways and training needed to turn your physics training into an engaging and rewarding career beyond academia. The symposium day schedule includes an interactive Panel Session, hosted by the Director of Private Sector Physics, where you can learn more about the people and their careers as private sector physicists.

      Plus de 75 % des diplômés en physique travaillent dans le secteur privé. Les jeunes physiciens, ou ceux qui souhaitent en savoir plus sur les carrières en physique en dehors du milieu universitaire, sont encouragés à participer à ce symposium interactif, qui leur permettra de découvrir les carrières des physiciens du secteur privé et leur fournira des informations et des conseils sur les parcours possibles et la formation nécessaire pour transformer leur formation en physique en une carrière passionnante et enrichissante en dehors du milieu universitaire. Le programme de la journée comprend une table ronde interactive, animée par le directeur de Private Sector Physics, au cours de laquelle vous pourrez en apprendre davantage sur les personnes et leurs carrières en tant que physiciens du secteur privé.

    • W1-6 Symposium on Fusion Energy in Canada | Symposium sur l'énergie de fusion au Canada

      The need to enhance global energy security and meet rising energy demands, including those driven by electrification, digitalization, and artificial intelligence, is widely recognized. Canada is the only G7 nation without a federal program or national strategy for fusion energy research. We propose a public session/symposium on Fusion Energy at CAP 2026. Investment in fusion energy is advancing worldwide at a rapid pace; here we hope to highlight the unique opportunity for Canada to join at the ground level and lead advancements in fusion energy. Growth in fusion energy supports not only energy independence but also national aims in research, technological diversification, defense, and national technological sovereignty. This meeting in the nation’s capital, Ottawa, is a unique opportunity to showcase the potential of fusion energy and to identify and highlight Canadian expertise in academia and ongoing work in associated industries in Canada.

      La nécessité de renforcer la sécurité énergétique mondiale et de répondre à la demande croissante en énergie est largement reconnue, notamment en raison de l'électrification, de la numérisation et de l'intelligence artificielle. Le Canada est le seul pays du G7 à ne disposer d'aucun programme fédéral ni d'aucune stratégie nationale en matière de recherche sur l'énergie de fusion. Nous proposons une session publique/un symposium sur l'énergie de fusion lors de l'ACP 2026. Les investissements dans l'énergie de fusion progressent rapidement à l'échelle mondiale ; nous espérons ici mettre en évidence l'opportunité unique pour le Canada de se joindre à ce mouvement dès le début et de mener les avancées dans le domaine de l'énergie de fusion. La croissance de l'énergie de fusion favorise non seulement l'indépendance énergétique, mais aussi les objectifs nationaux en matière de recherche, de diversification technologique, de défense et de souveraineté technologique nationale. Cette réunion dans la capitale nationale, Ottawa, est une occasion unique de mettre en valeur le potentiel de l'énergie de fusion, d'identifier et de souligner l'expertise canadienne dans le milieu universitaire et les travaux en cours dans les industries connexes au Canada.

    • W1-7 Q-STATE: Quantum Science, Technology, Applications, Training, and Education | Q-STATE : Science, technologie, applications, formation et éducation quantiques

      Q-STATE, offered through the Division of Quantum Information (DQI) in collaboration with the Division of Physics Education (DPE) and the Private Sector Relations Committee, is an event by and for the CAP community to learn about current directions in the field and the unique Canadian quantum landscape. This year’s symposium will offer sessions on the following topics:
      Quantum Information in Canadian Politics (Second National Strategy)
      Quantum and AI – hype, reality and nonsense
      Recent developments in the Canadian Quantum Industry and at Canadian Quantum Hubs
      Q-STATE, proposé par la Division de l'information quantique (DQI) en collaboration avec la Division de l'enseignement de la physique (DEP) et le Comité des relations avec le secteur privé, est un événement organisé par et pour la communauté de l'ACP afin de découvrir les orientations actuelles dans ce domaine et le paysage quantique unique du Canada. Le symposium de cette année proposera des sessions sur les thèmes suivants :
      L'information quantique dans la politique canadienne (deuxième stratégie nationale)
      Quantique et IA – battage médiatique, réalité et absurdités
      Développements récents dans l'industrie quantique canadienne et dans les pôles quantiques canadiens

    • W1-8 Artificial Intelligence in Radiotherapy | L'intelligence artificielle en radiothérapie

      Artificial Intelligence in Radiotherapy explores how advanced computational methods are transforming radiation oncology across the full clinical workflow. From automated image segmentation and adaptive treatment planning to outcome prediction, response assessment, and biologically informed dose optimization, AI-driven approaches are reshaping how radiotherapy is designed, delivered, and evaluated. This symposium will highlight methodological advances, clinical translation, and emerging challenges in validation, robustness, and implementation. By bringing together expertise spanning medical physics, imaging science, machine learning, and clinical oncology, the session aims to provide a comprehensive perspective on how AI is advancing precision, efficiency, and personalization in radiotherapy.

    • (DNP) W1-9 Fundamental Symmetries | Symétries fondamentales (DNP)
      • 365
        The search for highly forbidden nuclear decays

        The measurement of nuclear physics parameters has recently become increasingly important for the fields of geochronology, nuclear physics, and particle astrophysics. This is especially true for highly forbidden decays, which typically have very long half-lives, greater than one billion years. Experimental validation of these decays can allow us to understand their complex nuclear-structure effects, any quenching of the weak axial-vector coupling, their prominence as a background for rare-event searches, and their relevance for 0νββ experiments. However, although the long half-life makes these isotopes interesting, it also makes them very challenging to measure. This presentation will detail the motivation behind measuring these isotopes and go over recent experimental efforts to measure these decays, including the RadioActive isotope Measurement Program at SNOLAB (RAMPS), KDK+, and the Lutetium sCintillation Experiment (LUCE).

        Speaker: Dr Matthew Stukel (SNOLAB)
      • 366
        Development and testing of the TUCAN superfluid He-II based ultra cold neutron source at TRIUMF

        The TRIUMF Ultra Cold Advanced Neutron (TUCAN) Collaboration has built and tested an ultracold neutron (UCN) source based on superthermal downscattering in isopure superfluid Helium-4. During recent commissioning beamtimes the collaboration demonstrated a record storage density in a prototype EDM storage volume. This new UCN source will serve fundamental physics experiments, including an experiment to search for the electric dipole moment of the neutron (nEDM).

        The TRIUMF main cyclotron produces 483 MeV protons, which are directed into a tungsten spallation target. The resulting spallation neutrons are moderated first by room-temperature D2O, then by liquid deuterium (LD2) at 25 K, before they reach the production volume, which is filled with approximately 28 litres of isopure liquid He-4 at nearly 1 K. At this temperature, the superfluid component of the helium (He-II) has a phonon excitation mode that is close in energy to the peak thermal energy of the LD2-moderated neutrons. As the (now) cold neutrons excite this phonon mode in the He-II, they lose almost all their kinetic energy, becoming UCN with kinetic energy < 300 neV. The UCN can then be directed to experimental apparatus via specially coated guides, much as one would direct a gas.

        This presentation will go over the latest results and current status of both the TUCAN source and the nEDM experiment.

        Speaker: Mr Wolfgang Klassen (University of British Columbia)
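        As a back-of-envelope illustration of the energy scale quoted in the abstract above (a hedged sketch, not the collaboration's analysis), the speed corresponding to a 300 neV kinetic-energy cutoff follows directly from E = ½mv². The constants are standard rounded values; the function name is illustrative.

```python
import math

NEUTRON_MASS = 1.674927e-27  # kg (rounded CODATA value)
EV_TO_JOULE = 1.602177e-19   # J per eV

def neutron_speed(energy_nev):
    """Speed in m/s of a neutron with kinetic energy given in neV."""
    energy_j = energy_nev * 1e-9 * EV_TO_JOULE
    return math.sqrt(2.0 * energy_j / NEUTRON_MASS)

v_cutoff = neutron_speed(300.0)  # roughly 7.6 m/s
```

        At such low speeds ultracold neutrons totally reflect from suitable wall coatings at any angle of incidence, which is what allows the gas-like guiding and storage described above.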
      • 367
        Magnetic Field Characterisation for Gravitational Free Fall Measurements of Antihydrogen in the ALPHA-g Experiment

        The comparison of matter and antimatter provides one of the most sensitive tests of fundamental physics, yet until recently the gravitational behavior of antimatter had never been directly observed. For charged antimatter particles, gravitational forces are dwarfed by electromagnetic effects, making such measurements impractical. Antihydrogen—the electrically neutral bound state of an antiproton and a positron—overcomes this limitation and can be routinely produced, trapped, and studied by the ALPHA collaboration at CERN.
        While ALPHA was originally designed for precision laser spectroscopy of antihydrogen, the dedicated vertical apparatus ALPHA-g was commissioned to enable measurements of gravitational effects. In this talk, I will present the first experimental observation of the influence of gravity on antihydrogen atoms [1], together with a detailed discussion of systematic studies of the magnetic field based on electron-cyclotron-resonance (ECR) magnetometry. These measurements are essential for controlling and characterizing the dominant non-gravitational forces acting on trapped antihydrogen.
        I will conclude with an update on the current performance of ALPHA-g and an outlook on future experimental programs aimed at improving the precision of antimatter gravity measurements.

        1. E. K. Anderson et al., “Observation of the effect of gravity on the motion of antimatter,” Nature 621, 716–722 (2023).
        Speaker: Adam Powell (CERN)
    • 12:00
      BSPC Judges Meeting | Réunion des juges de la CMPE
    • 12:00
      Break for Lunch (12h00-13h30) | Pause pour dîner (12h00-13h30)
    • CAP/NSERC Liaison Meeting | Réunion du comité de liaison ACP/CRSNG
    • W-MEDAL1 CAP-CRM Prize Talk | Conférence du lauréat du Prix de l'ACP-CRM
    • 14:00
      Travel Time | Déplacement
    • W2-1 Future Particle Physics Energy Frontier Facilities | Installations futures à la pointe de la physique des particules
    • W2-2 Big Data in Matter, Materials, and Beyond | Le Big Data dans la matière, les matériaux et au-delà
      • 14:15
        Spacecraft Operations - Andrew Howarth
      • 14:45
        Space Weather and the Power Grid - Hannah Parry
      • 15:15
        Radiation and Spacecraft - Ian Mann
    • W2-3 Advancing Quantum Simulation based on 2 Dimensional Materials Through Canadian Collaboration | Faire progresser la simulation quantique basée sur les matériaux bidimensionnels grâce à une collaboration canadienne
    • W2-4 Artificial Intelligence in Radiotherapy | L'intelligence artificielle en radiothérapie
      • 368
        Integrating Artificial Intelligence into Head and Neck Radiotherapy: Early Steps Toward Response-Adaptive Treatment

        As cure rates improve for HPV-associated oropharyngeal cancer, attention has shifted toward reducing the burden of treatment toxicity without compromising disease control. Artificial intelligence and advanced imaging may offer tools to support this goal, though their clinical readiness remains an open question. This talk describes an ongoing research program exploring AI applications across three areas: longitudinal imaging biomarkers for tumour response monitoring during radiotherapy, prediction of treatment outcomes in surgically managed head and neck cancer using multi-omic data, and early detection of treatment-related complications. These projects are being developed in parallel with OPTIMA-OPC, a prospective de-escalation trial in HPV-positive oropharyngeal cancer, with the longer-term goal of embedding validated biomarkers into adaptive treatment protocols. We share preliminary findings, current limitations, and the practical challenges of translating AI tools from the research setting into clinical trials.

        Speaker: Houda Bahig (Centre hospitalier de l'Université de Montréal (CHUM))
    • W2-5 Private Sector Physics | Physique du secteur privé
    • W2-6 Symposium on Fusion Energy in Canada | Symposium sur l'énergie de fusion au Canada
    • W2-7 Q-STATE: Quantum Science, Technology, Applications, Training, and Education | Q-STATE : Science, technologie, applications, formation et éducation quantiques
    • W2-8 Transforming physics: EDIT-STEM tools to advance equity, diversity, and inclusion | Transformer la physique : les outils EDIT-STEM pour promouvoir l'équité, la diversité et l'inclusion
    • (DAMOPC) W2-9 Precision single particle and many-body physics in AMO systems | Physique de précision des particules individuelles et des systèmes à plusieurs corps dans les systèmes AMO
      • 369
        Recent Progress in Rydberg Atom-Based Radio Frequency Sensors

        We will describe the principles of Rydberg atom-based radio frequency (RF) sensors. The introduction will be followed by a description of recent advances made at Quantum Valley Ideas Laboratories. We will describe experiments where vapor cells are engineered for low background electric fields and enhancement of the RF field. These improvements lead to better sensitivity and reproducibility. Finally, we will describe a new readout approach based on a closed optical loop in the atom that enables simultaneous amplitude and phase readout, so-called IQ detection.

        Speaker: James Shaffer (Quantum Valley Ideas Laboratories)
      • 370
        It’s Been a Hot Second: Atomic clocks, timescales, and the redefinition of the SI second

        How do we know what time it is—and how can we ensure that a second measured today will be the same tomorrow, or on the other side of the world? Modern timekeeping answers these questions by tying time to the fundamental properties of atoms, achieving a level of precision that underpins both advanced technologies and fundamental physics.
        In this talk, I will describe how atomic clocks are used to realize official time in Canada and the role of the National Research Council’s Frequency and Time Group in generating and maintaining the national timescale. I will outline how national timescales are linked through international comparisons to form International Atomic Time (TAI), and how successive generations of atomic clocks have steadily improved accuracy and stability.
        Finally, I will review current international efforts toward a redefinition of the SI second. With optical atomic clocks now surpassing caesium-based standards by more than two orders of magnitude, this redefinition represents a significant step in the evolution of time measurement, with important implications for metrology and emerging technologies.

        Speaker: Scott Beattie
      • 371
        Building an integrated telecom-wavelength quantum platform with artificial atoms

        Scaling up light-based quantum devices for communications requires a platform capable of creating and processing telecommunication wavelength quantum light states. In this talk, I will introduce a quantum dot-based platform that our group has been developing together with NRC. The quantum dots, which act like artificial semiconductor atoms, are grown in a unique way; I will show how, already at this early stage in their development, they possess excellent properties that can potentially power on-demand quantum technologies. I will show how we interface these emitters with integrated photonic circuits, and conclude by discussing the next steps in the development of this Canadian quantum platform.

        Speaker: Nir Rotenberg (Queen's University)
    • 15:45
      Health Break with Exhibitors | Pause santé avec les exposants
    • W3-1 Future Particle Physics Energy Frontier Facilities | Installations futures à la pointe de la physique des particules
    • W3-2 Big Data in Matter, Materials, and Beyond | Le Big Data dans la matière, les matériaux et au-delà
      • 16:15
        Space Weather and Aviation - Robyn Fiori
      • 16:45
        Spacecraft Drag - Daniel Billett
      • 17:15
        Space Weather Impacts on Communications, Remote Sensing, and Navigation - David Themens
    • W3-3 Advancing Quantum Simulation based on 2 Dimensional Materials Through Canadian Collaboration | Faire progresser la simulation quantique basée sur les matériaux bidimensionnels grâce à une collaboration canadienne
    • W3-4 Artificial Intelligence in Radiotherapy | L'intelligence artificielle en radiothérapie
    • W3-5 Private Sector Physics | Physique du secteur privé
    • W3-6 Symposium on Fusion Energy in Canada | Symposium sur l'énergie de fusion au Canada
    • W3-7 Q-STATE: Quantum Science, Technology, Applications, Training, and Education | Q-STATE : Science, technologie, applications, formation et éducation quantiques
    • (DAMOPC) W3-8 Precision single particle and many-body physics in AMO systems | Physique de précision des particules individuelles et des systèmes à plusieurs corps dans les systèmes AMO
      • 372
        Techniques for Precision Metrology and the Realization of Quantum Sensors*

        We review distinctive experimental techniques relying on coherent scattering, precision metrology, and atom interferometry that have realized varied applications, including precise measurements of atomic lifetimes, masses of dielectric particles, atomic diffusion, centre-of-mass velocity, and gravitational acceleration. We show that the two-pulse photon echo technique is capable of realizing the most precise determination of the Rb $5P_{3/2}$ excited-state lifetime. We describe time-domain techniques that track the motion of dielectric microparticles confined by free-space optical tweezers and measure particle masses with a sensitivity of $10^{-16}$ kg. We detect the motion of Rb optical lattices in a buffer-gas environment to obtain the most comprehensive measurements of atomic diffusion, which can serve as the basis for a quantum pressure sensor capable of calibrating commercial pressure gauges. We outline a new generation of frequency-domain and time-domain techniques for the realization of state-of-the-art velocimeters that utilize laser-cooled Rb atoms. Finally, we review recent results from a new generation of frequency-domain echo atom interferometers that use ultracold Rb atoms channelled into an optical lattice to realize a gravimeter. A universal theme in all these experiments is the reliance on low-cost, homebuilt laser systems developed through industrial partnerships.
        *Work supported by CFI, OIT, NSERC, OCE, The Helen Freedhoff Memorial Fund and York University

        Speaker: Prof. A Kumarakrishnan (York University)
      • 373
        Quantum jump approach to atom interferometers

        Atom interferometers are a form of quantum sensor in which matter waves are used for high-precision inertial sensing, such as gravimetry and gradiometry. Optimizing these sensors involves a careful design of the interferometer geometry, as well as improving the detection scheme that monitors internal and center-of-mass states of an atomic cloud.

        In this talk we present a theoretical analysis of an advanced experimental detection scheme for the D2 line of $^{87}$Rb atoms, which consists of a sequence of laser beams. The interaction between the atoms and the beams is described by a master equation, which is solved numerically using a quantum jump approach. Each quantum jump takes into account the interplay between the hyperfine state and the momentum kick that an atom receives when emitting a photon. We present simulations of the detection scheme and discuss how it can impact both the sensitivity and accuracy of existing atom interferometers.
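        The quantum jump (Monte Carlo wavefunction) unravelling of a master equation can be sketched in its simplest form. The toy below is my own illustration, not the authors' code, and treats only an undriven two-level atom with decay rate $\gamma$: non-Hermitian evolution shrinks the state norm, a random jump resets the atom to the ground state, and the trajectory average reproduces the master-equation decay $e^{-\gamma t}$.

        ```python
        import numpy as np

        rng = np.random.default_rng(1)
        gamma = 1.0            # spontaneous decay rate (arbitrary units)
        dt, steps = 0.01, 400  # time step and number of steps
        ntraj = 2000           # number of stochastic trajectories

        def trajectory():
            # state amplitudes (c_g, c_e); start fully excited
            c = np.array([0.0, 1.0], dtype=complex)
            pops = []
            for _ in range(steps):
                # non-Hermitian evolution: H_eff contains -i*gamma/2 |e><e|
                c[1] *= np.exp(-0.5 * gamma * dt)
                norm2 = np.abs(c[0])**2 + np.abs(c[1])**2
                # jump with probability 1 - norm2 (a photon was emitted)
                if rng.random() > norm2:
                    c = np.array([1.0, 0.0], dtype=complex)
                    norm2 = 1.0
                c /= np.sqrt(norm2)
                pops.append(np.abs(c[1])**2)
            return np.array(pops)

        # average over trajectories: excited-state population vs. time
        pe = np.mean([trajectory() for _ in range(ntraj)], axis=0)
        t = dt * np.arange(1, steps + 1)
        ```

        In the full detection-scheme simulation described in the abstract, each jump would additionally branch over hyperfine states and apply a photon-recoil momentum kick; the stochastic structure, however, is the same.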

        Speaker: Karl-Peter Marzlin
      • 374
        Entanglement, loss, and quantumness: When balanced beam splitters are best

        Entanglement generation by beam splitters lies at the heart of quantum optics. Yet, the conjecture that maximal entanglement is generated by beam splitters with equal probabilities of reflection and transmission has remained unproved for two decades. I will show how we proved this conjecture by studying photon loss and found corollaries throughout quantum optics.
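        A minimal single-photon instance of this conjecture is easy to verify numerically (my own sketch, not the speaker's proof): a photon entering one port of a beam splitter with transmissivity $T$ produces the output state $\sqrt{T}\,|1,0\rangle + \sqrt{1-T}\,|0,1\rangle$, whose mode entanglement is the binary entropy of $T$, maximized at the balanced value $T = 1/2$.

        ```python
        import numpy as np

        def entanglement_entropy(T):
            # reduced state of one output mode is diagonal with
            # eigenvalues (T, 1-T), so the entanglement is the
            # binary entropy of the transmissivity T (in bits)
            p = np.array([T, 1.0 - T])
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        Ts = np.linspace(0.01, 0.99, 99)
        S = [entanglement_entropy(T) for T in Ts]
        best = Ts[int(np.argmax(S))]  # maximized at the 50:50 splitter
        ```

        The hard part, proved only recently, is extending such statements beyond the single-photon case to general inputs and in the presence of loss.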

        Speaker: Aaron Goldberg (National Research Council of Canada)
    • Best Student Poster Competition Finals Judging (Closed to delegates) | Jugement des finales de la compétition d'affiches étudiantes (session fermée) Rm 263 (cap.128) (Arts Bldg., U.Sask.)

      Rm 263 (cap.128)

      Arts Bldg., U.Sask.

      Convener: Wendy Taylor (York University (CA))
    • W-MIX Private Sector and Quantum Industry Meet and Greet Mixer Event | Rencontre et accueil avec le secteur privé et l'industrie quantique University Club, U.Sask.

      University Club, U.Sask.

      Everyone is invited - students, speakers, and audiences in both the Private Sector and Quantum Q-STATE Industry symposia.

      Stay tuned for information on the venue and the exact time.

    • Best Student Poster Competition Judges Meeting | Réunion des juges du concours de la meilleure affiche étudiant(e) Rm 263 (cap.128) (Arts Bldg., U.Sask.)

      Rm 263 (cap.128)

      Arts Bldg., U.Sask.

      Convener: Wendy Taylor (York University (CA))
    • 08:25
      Congress Registration and Information (08h30-16h00) | Inscription au Congrès et information (08h30-16h00) Ramp area (Arts Bldg., U.Sask.)

      Ramp area

      Arts Bldg., U.Sask.

    • 08:45
      Plenary hall opens | Ouverture de la salle plénière Rm 1150 (cap.505) (Health Sciences Bldg., U.Sask.)

      Rm 1150 (cap.505)

      Health Sciences Bldg., U.Sask.

    • R-PLEN1 Plenary Session | Session plénière - Saniya Heeba, Carleton University
    • R-STUD-COMP1 CAP Best Student Oral Presentations Final Competition ( talks 1-3) | Compétition finale de l'ACP pour les meilleures communications orales d'étudiant(e)s
      • 09:45
        Health Break with Exhibitors | Pause santé avec les exposants
    • 10:30
      Health Break with Exhibitors | Pause santé avec les exposants
    • R-STUD-COMP2 CAP Best Student Oral Presentations Final Competition (talks 4-8) | Compétition finale de l'ACP pour les meilleures communications orales d'étudiant(e)s
    • 12:00
      Break for Lunch (12h00-13h30) | Pause pour dîner (12h00-13h30)
    • Best Student Oral Competition Judges Meeting | Réunion des juges du concours oral des meilleurs étudiant(e)
      CAP-NSERC Liaison Committee Meeting | Réunion du comité de liaison ACP-CRSNG
    • R-MEDAL1 Brockhouse Medalist Talk | Conférence du lauréat de la médaille Brockhouse
    • R-MEDAL2 Vogt Medalist Talk | Conférence du lauréat de la médaille Vogt
    • 14:00
      Travel Time | Déplacement
    • FCC-Canada Collaboration Discussion Session | Séance de discussion sur la collaboration entre la FCC et le Canada
    • (PPD) R1-1 | (PPD)
      • 375
        The SuperCDMS experiment at SNOLAB

        The SuperCDMS SNOLAB experiment is a direct-detection dark matter experiment using semiconductor crystal detectors operated at cryogenic temperatures. The experiment is located in SNOLAB, 2$\,$km underground in the Creighton mine near Sudbury, Canada. With low background from cosmic sources, SNOLAB is ideal for rare-event searches. The experiment uses 24 detectors, made of silicon and germanium, comprising two detector types: one with phonon channels operated at high voltage ($\sim$100$\,$V), which exploits the Neganov-Trofimov-Luke (NTL) effect to amplify the phonon signal and achieve a low energy threshold, and another with both phonon and charge sensors that enables effective background rejection. The experiment will probe low-mass WIMPs, dark-photon absorption, and axion-like particles. With the detectors cooled to their operating temperature, the commissioning phase is underway, with science data-taking expected later this year. This talk gives an overview of the experiment, its science goals, and its current status at SNOLAB.
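        The NTL amplification mentioned in the abstract follows a standard textbook relation (not taken from the talk): an electron recoil of energy $E_r$ creates $N_{eh} = E_r/\varepsilon$ electron-hole pairs, and drifting them across a bias $\Delta V$ converts the work done into additional phonons,

        ```latex
        E_{\text{phonon}} = E_r + N_{eh}\, e\, \Delta V
                          = E_r \left(1 + \frac{e\,\Delta V}{\varepsilon}\right),
        \qquad N_{eh} = \frac{E_r}{\varepsilon}.
        ```

        With $\Delta V \sim 100$ V and $\varepsilon \sim 3$ eV (typical of electron recoils in Ge), the phonon signal is amplified by a factor of order 30, which is what pushes the energy threshold down in the high-voltage detectors.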

        Speaker: Sukeerthi Dharani (University of British Columbia)
      • 376
        Status and Outlook of the AURORA Low Mass Dark Matter Detector

        We present the preliminary design of the Argon Ultra-Radiopure Observatory for Rare-event Analysis (AURORA), a proposed experiment using doped argon and targeting dark matter in the sub-GeV mass range. The experiment will feature a dual-phase time projection chamber (TPC) instrumented with digital silicon photomultipliers and filled with argon mixed with a few ppm of dopants, such as xenon, trimethylamine (TMA), triethylamine (TEA), or trimethylgallium (TMG), pending planned characterization at Queen’s University. The TPC design maximizes the signal-to-background ratio in the keV and sub-keV energy region, while the addition of dopants photosensitive to argon de-excitation light will further enhance the ionization yield. This approach will enable, for the first time, exploration of dark matter as light as a few tens of MeV/c$^2$, reaching down to the neutrino fog after about one year of data taking.
        We will present the status of the AURORA design, progress on the simulation and validation plan and preliminary sensitivity projections that incorporate known systematics.

        Speaker: Shailaja Mohanty (Queen's University, Canada)
      • 377
        Background modelling at SuperCDMS

        SuperCDMS is a direct detection dark matter experiment located 2 km underground at SNOLAB, designed to have particular sensitivity to the dark matter mass region $<10\,\text{GeV}/c^2$. In order to claim discovery or set new limits on dark matter properties, such an experiment must have a detailed understanding of the backgrounds expected at the facility, the response of the detectors to these backgrounds, and a robust statistical framework to compare them to what a dark matter signal might look like. This talk addresses the challenges and progress in developing such a framework for SuperCDMS in preparation for first data-taking this year. It includes a discussion of the key backgrounds we expect to see at SuperCDMS and lays out a ROOT-based statistical analysis framework for first science results.

        Speaker: Madeleine Zurowski
      • 378
        In Situ Comparison of Silicon Photomultiplier Performance in Liquid Xenon with LoLX 2

        Liquid xenon (LXe) based detectors are a leading technology for low-background searches, including dark matter and neutrinoless double beta decay detection. Improving the sensitivity of next-generation experiments requires further reduction of radioactive backgrounds from detector components, motivating silicon photomultipliers (SiPMs) as a leading candidate to replace widely used photomultiplier tubes (PMTs).

        The Light Only Liquid Xenon (LoLX) Collaboration operates an upgraded small-scale detector (LoLX 2) at McGill University with a ~ 5 kg LXe target. The detector is designed to study scintillation properties of LXe and the production of Cherenkov radiation. Instrumented with a total of 80 SiPMs from two manufacturers, Hamamatsu and Fondazione Bruno Kessler, LoLX 2 also allows for the direct in situ comparison of photosensor performance in identical experimental conditions, using a vacuum-ultraviolet PMT for reference.

        This talk will present the results of the first experimental run of LoLX 2, including a comparison of the relative photon detection efficiency (PDE) and operational characteristics of the photosensors using laser calibrations and external gamma sources, highlighting a significant discrepancy of 33–38% in PDE for the Hamamatsu VUV-4 compared to PDE models from vacuum measurements. Supported by a comprehensive Monte Carlo simulation that includes photon transport and a detailed SiPM optical model, we resolved the discrepancy with an angular and wavelength-dependent PDE model incorporating surface shadowing effects. Prospects for the next upgrade of the system will briefly be discussed, including improved electronics for signal readout, the deployment of an internal radiation source, and the implementation of an online xenon purification system.

        Speaker: Frédéric Girard (McGill University)
      • 379
        Background simulation studies for the SuperCDMS experiment

        SuperCDMS is a direct detection dark matter (DM) experiment which is currently in its commissioning phase at the SNOLAB underground laboratory in Sudbury, Canada. It operates cryogenically cooled Ge and Si crystals with different sensor designs to perform a broadband DM search for particles with masses $\le 10\, \text{GeV}/c^2$, thereby exploring new regions of parameter space.

        Achieving this sensitivity requires an ultra-low background environment and a thorough understanding of the background composition. This relies on dedicated simulations providing sufficient statistics. SuperCDMS background simulations are based on Geant4 and cover radioactive contamination of materials, activation due to cosmogenic radiation, exposure to radon, accumulation of dust on surfaces and also cosmic muons.

        Of particular interest are long-lived radon daughters which can get implanted in material surfaces. Dedicated simulation studies have been performed to investigate the implantation of $^{210}$Pb in Ge and Si crystals.

        Another background study is focused on cosmic muons. Even though the muon flux deep underground at SNOLAB is only 0.27 $\mu$/m$^2$/day, muon interactions with the cavern rock or the experiment's materials can produce a large number of high-energy gammas or neutrons which may hit the detectors and mimic a potential signal.

        This talk will give an overview of SuperCDMS background simulations, which serve as the foundation for a comprehensive background model. I will also discuss the $^{210}$Pb implantation studies and give an example of how to simulate cosmic muons with Geant4.

        Speaker: Birgit Zatschler (Laurentian University, SNOLAB, University of Toronto)
    • (DPMB) R1-2 | (DPMB)
    • (DAMOPC) R1-3 | (DPAMPC)
      • 380
        Probing ultra-high-energy physics using the shape of a nucleus

        The nature of dark matter, and the origin of the matter-antimatter imbalance of the universe, are intriguing open questions in high-energy physics and cosmology. The breakdown of time-reversal symmetry (T) at ultra-high energy scales is believed to underlie one or both of these puzzles. However, these energy scales lie far above the reach of particle colliders. Fortunately, we can probe the breakdown of T-symmetry at such extreme energy scales using precision measurements of nuclei and atoms. In my lab, we study $^{153}$Eu, whose nuclear shape is highly sensitive to T-violation caused by physics outside the Standard Model. The response of the nucleus is further enhanced through electron-nuclear interactions in Eu$^{3+}$ ions in a crystal. All of this results in characteristic energy shifts between hyperfine states that can be measured using precision radio and optical spectroscopy. I will present a gentle introduction to this program of research, and discuss my group's T-violation search experiments.

        Speaker: Dr Amar Vutha
      • 381
        Spin Magnetometry with Reinforcement Learning

        Quantum sensors achieve measurement precisions well beyond classical bounds. To reach such sensitivities, methods for quantum optimal control are often employed, particularly in systems where decoherence is important; however, optimizing the control of quantum sensors is made difficult by the fact that not all Hamiltonian parameters are known. Here we investigate the use of reinforcement learning (RL) for optimal control of a spin magnetometer, wherein a single spin evolves according to an unknown background field. We apply the soft actor-critic RL algorithm to learn a policy, from which a set of transverse control fields may be sampled to improve the sensitivity of the magnetometer in time, in the presence of decoherence. The agent is trained so as to maximize the quantum Fisher information of the spin with respect to the unknown field. We train the agent on numerical simulations of the system, and then apply the resulting agent to various conditions not seen in training. We find that the agent is sensitive to certain changes in the system Hamiltonian, but overall is able to generalize well, supporting the use of such algorithms in quantum optimal control problems.
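        The figure of merit used above, the quantum Fisher information (QFI), can be computed numerically in the simplest case. The sketch below is my own illustration, not the authors' RL code: it assumes a decoherence-free spin-1/2 prepared in a superposition and precessing about $z$ under an unknown field $B$, and estimates the QFI from the fidelity between neighbouring states, recovering the analytic result $F_Q = t^2$.

        ```python
        import numpy as np

        sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli z

        def state(B, t):
            # spin-1/2 in |+> precessing about z under field B
            # (hbar = gyromagnetic ratio = 1); U = exp(-i B t sz/2) is diagonal
            psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)
            phases = np.exp(-1j * B * t * np.diag(sz) / 2)
            return phases * psi0

        def qfi(B, t, dB=1e-5):
            # pure-state QFI from the fidelity between neighbouring states:
            # F_Q ~ 8 (1 - |<psi(B)|psi(B + dB)>|) / dB^2
            overlap = abs(np.vdot(state(B, t), state(B + dB, t)))
            return 8 * (1 - overlap) / dB**2

        # for this geometry the analytic result is F_Q = t^2
        F = {t: qfi(0.3, t) for t in (0.5, 1.0, 2.0)}
        ```

        With decoherence and control fields, the QFI no longer grows as $t^2$, which is precisely the regime where a learned control policy can help.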

        Speaker: Dr Logan Cooke (University of Ottawa)
      • 382
        Optimizing Diamond Betavoltaic Cells for Operation under Tritium Beta Radiation

        Betavoltaic energy conversion uses a semiconductor device to convert beta radiation (high-energy electrons) from a radioisotope into electrical power. Current challenges include designing semiconductor structures that minimize backscattered electrons and non-radiative carrier recombination while maximizing minority-carrier diffusion lengths. Diamond is a promising material for tackling these challenges. In this work, we numerically modelled a diamond betavoltaic cell that converts beta radiation into electricity. We optimized the thickness and doping of the diamond structure to maximize the output power.

        Our numerical model uses Monte Carlo simulations to quantify energy-deposition profiles as a function of depth in diamond. Custom software converts the energy-density profile into an electron-hole-pair generation-rate profile. The generation-rate profiles are passed to the software Sentaurus, which assesses the diamond betavoltaic cell's performance under beta radiation by solving the Poisson and drift-diffusion equations. The model includes series and parallel resistances, doping-dependent carrier mobility, Shockley-Read-Hall recombination, and Auger recombination. Validation against experimental data demonstrates that our numerical model reproduces betavoltaic conversion mechanisms with high fidelity. We use a custom high-dimensionality particle swarm optimization algorithm to find the optimal layer thicknesses and doping for maximum power output.

        Results show that diamond is well suited to maximizing energy absorption through reduced electron backscattering, owing to its lower atomic number and lower density compared with other semiconductors. Diamond can absorb 5% more energy than SiC, 18% more than GaN, and 25% more than GaAs. The optimal diamond structure enables carrier collection exceeding 95% over a wide range of beta-particle energies. This high carrier collection can be explained by large minority-carrier diffusion lengths, which also limit bulk recombination. The primary carrier-collection losses are attributed to surface recombination, which is exacerbated by the absence of a wider-bandgap passivating semiconductor on the device surface. Optimal devices have a conversion efficiency of 30%, outperforming GaAs betavoltaic cells by an order of magnitude. A tolerance study demonstrates that significant variations in thickness and doping reduce the maximum power by at most 5%.
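        The particle-swarm step of such an optimization pipeline can be sketched generically. The code below is a minimal PSO (my own sketch, not the authors' custom high-dimensionality algorithm), applied to a hypothetical two-parameter power surrogate whose optimum is placed at made-up values of thickness and doping exponent.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)

        def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            """Minimal particle swarm minimizer (illustrative only)."""
            lo, hi = np.array(bounds).T
            x = rng.uniform(lo, hi, (n_particles, len(lo)))  # positions
            v = np.zeros_like(x)                             # velocities
            pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
            g = pbest[np.argmin(pbest_f)].copy()             # global best
            for _ in range(iters):
                r1, r2 = rng.random((2,) + x.shape)
                # inertia + attraction to personal and global bests
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                fx = np.array([f(p) for p in x])
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                g = pbest[np.argmin(pbest_f)].copy()
            return g, f(g)

        # hypothetical stand-in for "maximize output power": minimize the
        # negative of a smooth surrogate peaked at (thickness=2.0 um,
        # doping exponent=17.0) -- both numbers are invented for the demo
        power = lambda p: -np.exp(-((p[0] - 2.0)**2 + 0.1 * (p[1] - 17.0)**2))
        best, val = pso(power, [(0.1, 10.0), (14.0, 20.0)])
        ```

        In the real problem the objective evaluation is a full Monte Carlo plus Sentaurus device simulation rather than a closed-form function, which is why an inexpensive, derivative-free optimizer such as PSO is attractive.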

        Speaker: Mathieu de Lafontaine (University of Ottawa)
      • 383
        Time-domain investigation of strong-field-driven, core-electron-associated recollision physics

        The time-dependent response of the strong-field-driven, multi-electron-correlated photorecombination process is investigated using the perturbed-trajectory measurement method. It is well known that field-free xenon has a very large photoionization cross section around a photon energy of 80 eV, a giant plasmonic resonance arising from the electron-electron interaction between the 4d and 5p orbitals. However, the time-dependent response of a xenon atom interacting with a strong laser field had not previously been investigated on the attosecond time scale. We therefore perform an all-optical measurement of the strong-field photorecombination associated with the xenon giant plasmon resonance and observe a large shift of the recollisional emission time compared with the ordinary attochirp.
        We carry out numerical simulations using time-dependent density functional theory that reproduce the large deviation of the recollision emission, resulting in an attosecond pulse influenced by the giant plasmon resonance of xenon. Because xenon has 54 electrons occupying a large volume, we model the many-electron atom as a superposition of metallic shells. The attosecond pulse calculated with this model atom reproduces the characteristic features of the measured dipole radiation, in good agreement with experiment.
        We also confirm that the maximum photon energy of the attosecond pulse can exceed the cut-off estimated by the conventional theory based on the single-active-electron approximation. For this purpose, we calibrate the laser intensity via the cut-off photon energy of argon. Under the same argon-calibrated conditions, we find that the maximum recollision energy exceeds the theoretical limit, sometimes substantially, implying that the ion plays a role; the plasmon's Stark shift, and any other effect that shifts the resonance, must be included.
        We expect that strong-field physics associated with core electrons will modify the response of atoms and molecules on a sub-cycle time scale. Eventually, this will give us optical access to the control of core-hole dynamics.

        Speaker: Dong Hyuk Ko
    • (DCMMP) R1-4 | (DPMCM)
    • (DTP) R1-5 | (DPT)
      • 384
        Supercritical Black Holes

        Supercritical behaviour in conventional thermodynamic systems such as van der Waals fluids and water is now under active study both theoretically and experimentally. Although liquids and gases become indistinguishable beyond the critical point, forming a single fluid phase, it is possible to identify 'liquid-like' and 'gas-like' behaviour in the supercritical regime. Here I will discuss how to extend these considerations to black holes. I will show how an application of the Lee-Yang phase transition theorem can yield a comprehensive phase diagram for charged AdS black holes. Specifically, a generalization of the Widom line can be constructed: as the system crosses this line in the supercritical region, the black hole goes from exhibiting 'liquid-like' behaviour to 'gas-like' behaviour, without any discontinuities appearing in thermodynamic state functions.

        Speaker: Robert Mann
      • 385
        Not all four dimensional Black Holes can spin

        In recent decades, black hole solutions with non-standard topologies or non-standard asymptotic behaviour have gained considerable attention. An early example is the family of periodic static solutions in 3+1 dimensions, in which infinitely many coaxial, equidistant, identical black holes are stacked in an array along the axis. Each solution is completely characterized by the area $A$ of the black holes and the distance $D$ between two consecutive black holes. These solutions are referred to as periodic Schwarzschild, since there is a single static horizon in the fundamental domain.

        A natural question, relevant to both the context of classification of stationary solutions and to the existence of solutions with non-standard topologies, is whether a periodic Kerr solution exists or not. In this talk, we will discuss different techniques used to answer this question, in particular a recent result that shows that a static Myers/Korotkin-Nicolai solution cannot be put into stationary rotation if $12 D < \sqrt{A}$. This result establishes a novel static rigidity in 3+1 solutions.

        Speaker: Dr Javier Peraza (Perimeter Institute for Theoretical Physics)
      • 386
        Self-regularized entropy: What does black hole entropy predict for tests of Kerr no-hair theorem?

        We compute the canonical (brick-wall) entropy of Hawking radiation in a quantum black hole whose exterior geometry, in the strong-field region, is approximated, to first order in a small quadrupole parameter, by the static q-metric, an exact vacuum solution of the Einstein equations. Counting near-horizon quasinormal modes shows that a modest quadrupolar deformation self-regularizes the ultraviolet divergence: the entropy of Hawking radiation is finite for any non-vanishing quadrupole, without an ad hoc cutoff. Matching this canonical entropy to the Bekenstein-Hawking entropy leads to no-hair-violating multipoles at the percent-to-tens-of-percent level, and provides concrete observational targets for the next-generation Event Horizon Telescope (ngEHT) and the Laser Interferometer Space Antenna (LISA).

        Speaker: Shokoufe Faraji
      • 387
        Fractional Skyrmions

        We establish the existence of Skyrmion-type solitons that carry a $Z_N$ topological quantum number. Thus, a configuration of $N$ such Skyrmions has a topological quantum number equivalent to zero. It is therefore reasonable to assign a fractional topological charge of $1/N$ to such solitons. When interacting with chiral fermions, the induced fermion number on these solitons is indeed $1/N$. We show that such solitons arise in the non-linear sigma model based on the manifold corresponding to the coset space $SO(N^2-1)/SU_{\rm adj.}(N)$. Crucial in this analysis is that $\pi_1(SU_{\rm adj.}(N)) = Z_N$.
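
        Schematically, the topological bookkeeping described above (a reader's summary, not the authors' derivation) can be written as:

```latex
\pi_1\big(SU_{\mathrm{adj.}}(N)\big) = Z_N
\quad\Longrightarrow\quad
Q_{\mathrm{top}} \in \tfrac{1}{N}\,\mathbb{Z},
\qquad
\underbrace{Q_{\mathrm{top}} + \cdots + Q_{\mathrm{top}}}_{N\ \text{solitons}} \equiv 0,
```

        i.e. a single soliton may consistently be assigned $Q_{\rm top} = 1/N$, with any configuration of $N$ of them being topologically trivial.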

        Speaker: Prof. Manu Paranjape
      • 388
        Positive mass theorems for toric gravitational instantons

        A gravitational instanton is a four-dimensional, complete, Ricci-flat manifold with prescribed asymptotic behaviour (e.g. asymptotically locally flat). They arise in general relativity as saddle points of the Euclidean gravitational path integral. We will describe a new set of mass comparison theorems for manifolds with non-negative scalar curvature: their mass is bounded below by the mass of a gravitational instanton `ground state' in the same asymptotic class.

        Speaker: Prof. Hari Kunduri (McMaster University, Mathematics and Physics)
      • 389
        Tunable quantum thermodynamic machine enabled by a diatomic molecular system

        We present quantum heat machines using a diatomic molecule modelled by a q-deformed Morse potential as a working medium. We analyze the effect of the deformation parameter and other potential parameters on the work output and efficiency of the quantum Otto and quantum Carnot heat cycles. Furthermore, we derive the analytical expressions of work and efficiency as a function of these parameters. Interestingly, our system operates as a quantum heat engine across the range of parameters considered. In addition, the efficiency of the quantum Otto heat engine is seen to be tunable by the deformation parameter. Our findings provide useful insight for understanding the impact of anharmonicity on the design of quantum thermal machines.

        Speaker: Collins Edet (Universiti Malaysia Perlis)
    • (PPD) R1-6 | (PPD)
      • 390
        Searches and constraints on beyond the Standard Model (BSM) physics with the ATLAS Detector

        The Standard Model is the reigning theory of fundamental particles and their interactions, and has held up to rigorous tests at collider experiments up to and including the LHC. Despite this, the Standard Model has known shortcomings, such as its inability to explain the origin of dark matter, the matter-antimatter asymmetry of the universe, and the fine tuning of the Higgs boson mass. Many beyond the Standard Model (BSM) theories have been proposed to address these shortcomings, and many such theories predict signatures directly accessible at the LHC. This talk will present selected results from the ATLAS full Run-2 and/or partial Run-3 dataset at a centre-of-mass energy of 13/13.6 TeV, including constraints on SUSY, exotic signatures, and BSM models with Higgs and vector boson signatures.

        Speaker: John Patrick Mc Gowan (University of Victoria (CA))
      • 391
        Recurrence Relations and Dispersive Techniques for Precision Multi-Loop Calculations

        Ab initio predictions of two-loop electroweak contributions to observables are increasingly essential for precision collider experiments, yet their evaluation remains very challenging. We connect recurrence techniques and dispersive methods in order to evaluate complex multi-loop Feynman diagrams. By expressing multi-point Passarino-Veltman functions in a two-point basis and using shifted space-time dimensions with recurrence relations, we minimize the number of required dispersive integrals. This approach reduces computation time and enables a precise and efficient analysis of one- and two-loop diagrams. This talk will highlight new developments arising from the combination of the recurrence approach with dispersive methods.

        Speaker: Dr Aleksandrs Aleksejevs (Memorial University of Newfoundland)
      • 392
        Constraints on the vector-like quark model from rare B decays

        I examine the minimal vector-like quark model, in which a single down-type vector-like quark is added to the Standard Model, in light of the latest experimental data on rare $B_{(s)}$ decays. It is shown that, once the so far overlooked Higgs loop contributions are taken into account, this imposes a stringent constraint on the mass of the vector-like quark, placing it within the discovery range of the LHC.

        Speaker: Mohammad Ahmady
      • 393
        Search for Gravitational Wave-Coincident Neutrinos in DEAP-3600

        Gravitational wave observations from the Laser Interferometer Gravitational-Wave Observatory (LIGO) have been used to pinpoint the timing of several compact object mergers involving a neutron star over the past decade. These precise timestamps allow for more efficient searches for particle signals from a given merger by enabling tight coincidence-timing cuts that reduce background and allow sidebands outside these windows to be used for comparison. One such signal is neutrino emission, as yet undetected but theorized to exist, which may be observable from Earth. DEAP-3600 is a liquid argon scintillation detector with minimal backgrounds, designed to search for dark matter at very low energies. Its argon target allows neutrino absorption interactions to be visible in the detector, providing a possible channel for observing neutron star merger neutrinos. During its second fill, from 2017 to 2020, DEAP-3600 recorded data in conjunction with seven merger events observed by LIGO. These data were analyzed using an empirical background model with the goal of either identifying an interaction from a merger neutrino or refining exclusion limits set by other similar searches.

        Speaker: Daniel Huff (University of Houston)
      • 394
        Sub-GeV dark matter from cosmic ray bremsstrahlung in the atmosphere

        The null results in direct detection motivate the exploration of a broader mass range for thermal relic dark matter (DM) candidates. A natural way to extend these models is to consider masses in the MeV to GeV range. One way to do this while avoiding bounds from the Cosmic Microwave Background is to introduce a dark sector consisting of a dark mediator and a stable DM candidate.
        In this project, we explore a way that inelastic cosmic ray collisions can produce a sub-GeV DM candidate with a boosted energy spectrum via proton bremsstrahlung. The production model uses the most recent form factors for dark vector production from initial state radiation including the fitting of resonant structures. The calculation for this production mode is much more similar to an accelerator calculation than other direct detection efforts with the twist of using the cosmic ray spectrum as a varying beam energy. The DM spectrum peaking at higher energies than the galactic halo allows lighter DM candidates to produce significant recoil signals in the detector. In particular, neutrino experiments are more sensitive to these events as the detector recoil energies lie well within the experiment’s higher thresholds. The larger exposure of such experiments gives them more coverage of the phase space for DM models with a dark photon mediator.

        Speaker: Branden Aitken
    • (DQI/DPE) R1-7 | (DIQDEP)
      • 395
        Preliminary Study on Open Educational Resources in Large Introductory Physics Courses

        Open Educational Resources (OER) are freely available teaching materials which can be used by instructors and their students to supplement or replace traditional large-publisher textbooks and other resources. This can reduce the cost of, and increase accessibility to, the resources required for students in university physics courses. As introductory physics course enrollment has grown over the past decades, large-publisher textbooks have been common both for the textbook content and for the online software they provide to facilitate the submission and grading of course assignments (as traditional assignment submissions became unfeasible to grade with typical TA resources). Therefore, a barrier to using OER in large introductory physics courses is a matter of both a) the course content and b) the grading resource problem. In this talk we will discuss the implementation of OER material in multiple large introductory physics courses, and different ways the grading resource problem has been addressed in these courses. We will then discuss preliminary results from a multi-year study of this implementation, for both Science and Engineering large introductory physics courses.

        Speaker: Mark Robert Baker (Western University)
      • 396
        Towards a Concept Inventory on Quantum Information

        Concept inventories like the FCI or the BEMA have been valuable tools in studying how well students understand physics concepts, mostly at the introductory level. A few have been created for more advanced topics including quantum mechanics, but work on creating a concept inventory on quantum information has started only recently.

        We will describe how we are developing our concept inventory, illustrate differences from the typical process due to the unique nature of the content and learning environments, and discuss a few surprising insights. We will also provide examples for how developing an instrument like this can inform course design and the creation of meaningful learning outcomes.

        Speaker: Shania Smagh (Simon Fraser University)
      • 397
        Bringing Physics to Life: Free Interactive Resources for Educators and Students

        The talk will introduce a new initiative co-designed in consultation with our core partners, TRIUMF and Cenovus, and funded by the NSERC Chairs in Inclusion in Science and Engineering program and Cenovus Energy. The program explores STEM career pathways and provides curriculum-aligned resources for educators, while linking students to diverse role models, particularly those from equity-deserving groups. Its overarching goal is to strengthen physics education in rural and remote communities across Atlantic Canada at the intermediate and high-school levels, while inspiring more students to consider future careers in science and engineering.

        We will introduce a growing collection of free, highly interactive lesson materials that connect core physics concepts to real-world applications and in-demand STEM careers. These resources are designed for flexible use in both online and in-person classrooms, whether delivered by teachers or by guest presenters. Spanning Grade 8 through Level 3 across Atlantic Canadian provincial curricula, the lessons address topics such as energy, electricity, forces, waves, and modern physics, and explicitly highlight how these ideas are used in industry, research, and everyday technologies.

        The materials are developed using the Lumio digital platform to promote hands-on, inquiry-based, and highly engaging learning experiences. The session will feature a live demonstration of selected Lumio-based physics lessons, allowing participants to experience the activities from a learner’s perspective. It will also mark the official launch of a new public website that hosts the curriculum materials, related physics resources, and tools that connect teachers with STEM role models. Throughout the session, participants will be invited to provide feedback on both the website’s usability and the pedagogical effectiveness of the interactive resources. This input will directly inform future development, ensuring that the program continues to evolve in ways that best support educators and learners across Atlantic Canada.

        Speaker: Dr Svetlana Barkanova (Grenfell Campus of Memorial University)
      • 398
        Understanding Quantum Algorithms with Board Games

        The difference between quantum and classical algorithms can be difficult for students to understand, especially when the problems being tackled seem abstract and efficiency scaling factors obscure realistic implementation overheads. For introductions and informal education, games provide a way for learners to develop intuition for physical and mathematical concepts through role-playing. We will present an activity that explores the quantum search (Grover's) algorithm as a player-vs-player board game, in which learners role-play as a classical and a quantum computer trying to find marked elements as quickly as possible; it is intended for high-school students and early undergraduates. We will present how the game incorporates fundamental elements of quantum information science such as probability, decoherence, quantum error correction, and measurement collapse; how it demonstrates the scaling of the classical and quantum solutions relative to each other; and the limitations of the analogies used in the game. Feedback from workshops using the game with high-school students, undergraduates, and teachers will be presented. All materials are open-source and can be printed at home, with custom 3D-printed die models available. And yes, we will have copies available to play with.
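
        The scaling contrast the game dramatizes can be checked in a few lines of Python. This is the generic textbook calculation, not material from the workshop: for a single marked item among N, the marked amplitude after k Grover iterations is sin((2k+1)θ) with sin θ = 1/√N, so roughly (π/4)√N iterations suffice, versus about N/2 classical guesses on average.

```python
import math

def grover_success_probability(n_items: int, n_iterations: int) -> float:
    """Probability of measuring the single marked item after k Grover
    iterations, in the standard amplitude-rotation picture."""
    theta = math.asin(1.0 / math.sqrt(n_items))
    return math.sin((2 * n_iterations + 1) * theta) ** 2

N = 64
# Optimal iteration count is approximately (pi/4) * sqrt(N).
best_k = round(math.pi / (4 * math.asin(1 / math.sqrt(N))) - 0.5)
print(best_k, grover_success_probability(N, best_k))  # 6 iterations, p ~ 0.997
# A classical player needs ~N/2 = 32 guesses on average for the same board.
```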

        Speaker: John Donohue
    • (DSS) Surface Science R1-8 | Science des surfaces (DSS)
      • 399
        Enabling Atomically Precise Fabrication Through Inverted Mode STM

        In this talk, based on recently published work [1], we introduce a novel scanning tunneling microscopy (STM) method called inverted-mode STM (IM-STM), an approach that offers a fundamentally new way to do STM and enables atomically precise fabrication via mechanosynthesis. Performing reproducible manipulation of covalently bonded atoms requires control over the atomic configuration of both sample and probe – a longstanding challenge in STM. By replacing the traditionally sharp STM tip with an annealable Si probe and using tailored organic molecules deposited on a Si(100) surface as mini-tips to image this probe apex, IM-STM effectively solves this problem and provides the necessary control of both sides of the tunnel junction.

        These molecules are designed to react with the probe apex; the two sides of the tunnel junction can act as chemical reagents, which can be positioned with sub-angstrom precision. This allows the Si probe apex to be utilized as a "build site": an atomically-flat, crystalline mesa where abstraction or donation of atoms from or to the probe is possible, via interactions with the surface-bound molecules. The geometry of the probe apex further enables multiple molecules to sequentially interact with the same build site. We demonstrate this by using a novel alkynyl-terminated molecule to reproducibly abstract hydrogen atoms from an H-passivated Si(100) probe apex. This approach is expected to extend to other elements and moieties, opening a new avenue for scalable atomically precise fabrication with mechanosynthesis.

        [1] E. Barrera et al., “Inverted-mode scanning tunneling microscopy for atomically precise fabrication,” arXiv:2512.24431 [cond-mat.mes-hall] (2025), https://doi.org/10.48550/arxiv.2512.24431 (submitted for peer review).

        Speaker: Mr Bheeshmon Thanabalasingam (CBN Nano Technologies Inc.)
      • 400
        Atomically Precise Silicon Abstraction by Inverted-Mode STM

        Direct 3D manipulation of covalently bonded atoms remains a challenge for atomically precise fabrication. Here, we introduce inverted-mode scanning tunnelling microscopy (IM-STM) [1] as a new approach for controlled atomic-scale reactions and demonstrate its application to individual silicon atom abstraction under ultra-high vacuum and cryogenic conditions. A silicon probe chip (SPC) with an atomically clean Si(100)-2x1 crystalline terrace at the apex serves as the probe, while a silicon wafer bearing isolated, custom-synthesized, surface-bound molecular tools acts as the sample. These molecular tools function both as imaging agents and as tools for chemical manipulation in a mechanosynthesis-based process. As the sample is scanned with the SPC, each protruding molecule provides a mirror image of the probe apex, and can immediately participate in surface reactions, enabling rapid verification and repeatability. For subtractive silicon patterning we employ MAOC-C2I, a tripodal molecule featuring an ethynyl iodide (-C2I) functional group. After electrical bias pulse-induced cleavage of the iodine, the resulting C2 radical is aligned with a target Si(100)-2x1 dimer of the SPC. A controlled approach-retraction process transfers one silicon atom to the molecule, akin to “pick-and-place” fabrication. This leaves unique silicon vacancies at the target site, which we characterize with IM-STM imaging and density functional theory (DFT) calculations. Imaging with new molecules elsewhere on the sample surface confirms changes to the SPC lattice, and allows iterative targeting for the next abstraction, thus enabling a new capability towards atomically precise fabrication, and general manipulation of covalently-bonded silicon atoms in 3D.

        [1] E. Barrera et al., “Inverted-mode scanning tunneling microscopy for atomically precise fabrication,” arXiv:2512.24431 [cond-mat.mes-hall] (2025), https://doi.org/10.48550/arxiv.2512.24431 (submitted for peer review).

        Speaker: Dr Rosemary Cranston (CBN Nano Technologies Inc.)
      • 401
        The effect of surface plasma treatment on the oxidation states of Ge and SiGe

        In this work we study the oxidation states at the SiGe/Al2O3 and Ge/Al2O3 interfaces using X-ray photoelectron spectroscopy. The effect of different pre-deposition chemical cleaning techniques, as well as of an in-situ H2 plasma treatment, on the oxidation states of Ge and Si at the interface was investigated. One would expect that H2 plasma cleaning could significantly suppress SiO and GeO oxide formation by passivating the surface. We demonstrate that depositing Al2O3 directly on top of SiGe causes the formation of non-stoichiometric SiO and GeO, which could be detrimental to ultimate device performance due to the introduction of dangling bonds at the surface. These dangling bonds could act as trap states and reduce the mobility of the carriers in the channel. Using a germanium cap layer on top of the SiGe before the Al2O3 deposition can reduce some of the non-stoichiometric oxide formation. We also show that an in-situ H2 plasma cleaning of the surface prior to oxide deposition can significantly reduce the formation of trap-forming oxides. Our findings can be useful for device fabrication technology based on the strained-germanium-on-silicon material platform.

        Speakers: Dr Shahin Honari (National Research Council Canada), Mr Yousef Karimi Yonjali (National Research Council Canada)
      • 402
        Modifications of flexible organic semiconducting polymer thin films upon folding

        Not only are thin films of organic semiconducting polymers on flexible plastic substrates (such as polyethylene terephthalate, PET) suitable for several applications in organic electronics, but they also present intriguing mechanical properties due to their variable degree of crystallinity, thickness, and stacking of polymer chains. These properties are controlled by the degree of tacticity of the polymer, preferential alignment of the polymer by the substrate surface, and π-electron-related van der Waals interactions between polymer repeating units belonging to different polymer filaments. Nonetheless, the effects of substrate folding on the mechanical properties of organic semiconducting polymer thin films are poorly understood because of the lack of detailed structure-function relationship studies. In this study, we will use a multi-technique approach based on atomic force microscopy, Raman spectroscopy, and X-ray diffraction to investigate the role of substrate folding in the formation of ridges on organic thin films of poly(3-hexylthiophene) (P3HT) at different thicknesses, concentrations of crystallization additives (including 1-dodecanethiol), and annealing temperatures. Through Raman spectroscopy, we find that ridges formed upon bending may be either more crystalline or more disordered than the surrounding layer, depending on the additive concentration. It is also found that taller and more prominent ridges are formed at increasing concentrations (up to 10% vol) of crystallization additive upon first-time folding of the polymeric thin film, while second-time bending in the orthogonal direction leads to shorter and less prominent ridges. This phenomenology may be the result of the interplay of shear-induced amorphization and additional crystallization due to 1-dodecanethiol diffusion in P3HT upon bending. Possible models towards the understanding of these phenomena will be discussed.

        Collectively, our work elucidates the critical role of the mechanical properties of organic polymer thin films in their applications in flexible electronic devices, such as organic photovoltaics, light-emitting diodes, and thermoelectric generators.

        Speaker: Xin Wei
      • 403
        Interface Engineering in P3HT:PCBM OPVs via TRT-Transferred Graphene Oxide and Tunable LiF Nanoparticle Deposition

        Graphene oxide (GO) is a promising electron transport layer (ETL) for organic photovoltaics (OPVs) due to its tunable surface chemistry and interfacial properties. In parallel, LiF nanoparticle interlayers are widely used to improve cathode contact behavior, as they provide morphological properties which contribute to energy level alignment in these devices. Here, we present a study which evaluates the performance of thermal release tape (TRT)-assisted GO transfer printing, both as an ETL and as a means of transferring LiF nanoparticles, spin-coated on top of the GO, under an Al electrode.

        Devices are fabricated with the structure ITO/PEDOT:PSS/P3HT:PCBM/GO/LiF/Al. GO is introduced on top of the photoactive layer while also preserving the integrity of the active organic film. We compare multiple TRT types and release conditions to evaluate their influence on GO continuity, adhesion, O to C ratio, and effective thickness. LiF NPs are synthesized using a reverse micelle approach, where key processing parameters, including micelle loading ratio and spin-coating conditions, are systematically varied to tune NP density and distribution and relate them to different interlayer mechanisms.

        Finally, photovoltaic performance in devices is measured under simulated AM1.5G illumination using current–voltage characterization to extract power conversion efficiency (PCE), short-circuit current density (Jsc), open-circuit voltage (Voc), and fill factor (FF). Trends in series and shunt resistance are analyzed to connect electrical performance to interfacial transport and recombination. By mapping device metrics against GO and LiF printing variables, this study establishes potential for carbon-based ETLs combined with nanoparticle-enabled cathode interface engineering.

        Speaker: Indra Guzman Moreno (Concordia University)
      • 404
        Additive Mechanosynthesis of C2H Moieties on Si(100):H Using Inverted-Mode STM

        Atomically Precise Fabrication (APF) is the creation of covalently-bonded structures through addition, subtraction, or manipulation of atoms or small molecules. Our approach to APF is Inverted-Mode Scanning Tunneling Microscopy (IM-STM), a mechanosynthesis-based technique that uses tailored 3D molecules deposited on a sample to scan and react with a flat, crystalline probe, enabling reagent transfer to and from the probe apex with sub-angstrom precision.[1,2]

        Here, we demonstrate the transfer of C2H and C2H2 onto a Si(100):H probe apex and present a novel method of determining product atomic configurations. On-surface, covalently bound C2H and C2H2 were created from the reaction of dangling bond (DB) patterns on the probe with custom 3D molecules (EAOGe-C2I) presenting either an upright C2 or C2H moiety as the “feedstock”. As these products fall outside the limits of detection of most on-surface spectroscopic methods, we developed a method that leverages established radical chemistry trends to determine the atomic configurations of the products.

        By varying DB patterning, targeting, and trajectory, transferred feedstock can form singly-bound C2H, C2 + dihydride, and on- and inter-dimer C2H2 products on the probe. Product atomic configurations were determined by interconverting the products using mechanosynthetic reactions, supported by DFT and DFTB+ simulations. Mechanistic hypotheses suggest that mid-trajectory hydrogen tunneling plays a role in reaction selectivity.

        Our ability to positionally control feedstock transfers and perform sub-monolayer characterization represents a significant advancement in APF, opening new avenues for development of complex structures at the atomic scale.

        Track: Surface Science

        Keywords: Atomically precise fabrication, STM, Semiconductors

        [1] T. Huff et al., “Molecular tools for non-planar surface chemistry,” arXiv:2508.16798 [cond-mat.mtrl-sci] (2025), https://doi.org/10.48550/arXiv.2508.16798 (submitted for peer review).

        [2] E. Barrera et al., “Inverted-mode scanning tunneling microscopy for atomically precise fabrication,” arXiv:2512.24431 [cond-mat.mes-hall] (2025), https://doi.org/10.48550/arxiv.2512.24431 (submitted for peer review).

        Speaker: Sam Rohe (CBN Nano Technologies Inc.)
      • 405
        Inverted-Mode Scanning Tunneling Microscopy as a Route Towards Atomically Precise Solid-State Quantum Technologies

        Solid-state qubit systems such as T centres and donor spin qubits in silicon are atomic architectures of increasing interest as promising routes toward scalable quantum technologies. These systems can exploit nuclear and electronic degrees of freedom, are optically addressable, can operate at room temperature [1,2], and naturally exhibit sought-after quantum behaviour such as long spin coherence times [1]. These architectures are compatible with solid-state optical and electronic devices, which connects them to the vast capabilities and commercial importance of the semiconductor industry, yet challenges in their fabrication persist [2]. Most notably, there exists a capability gap in reliably positioning target atoms or few-atom functional groups with atomic-scale precision.

        Here, we apply Inverted-Mode Scanning Tunneling Microscopy (IM-STM) [3], an approach which has the potential to address fabrication limitations by enabling deterministic placement of covalently-bound atoms within crystalline hosts: atomically precise mechanosynthesis. IM-STM reverses the traditional roles of probe and sample, using flat, crystalline Si(100) mesas as probes and customized 3D molecules deposited on a Si substrate as local STM imagers. These 3D molecules enable atomically precise modification of the “probe”, including donation and abstraction of atoms to and from the surface. We demonstrate these capabilities through the oriented placement of C2H moieties on Si(100) probes into two stable surface configurations. C2H moieties play a key role in quantum applications, serving as the main constituent of T centres. Structures containing up to eight C2H units are presented, substantiating the ability to construct complex, tailored atomic structures using IM-STM. Mechanically controlled chemical reactions such as these offer a new fabrication technique to support the needs of the solid-state qubit community, where coherence, readout fidelity, and interqubit coupling strengths depend strongly on placement accuracy and the local electronic environment.

        [1] Zhang, G., Cheng, Y., Chou, J.-P. & Gali, A. Material platforms for defect qubits and single-photon emitters. Appl. Phys. Rev. 7, 031308 (2020).
        [2] Chatterjee, A. et al. Semiconductor qubits in practice. Nat. Rev. Phys. 3, 157–177 (2021).
        [3] Barrera, E. et al. Inverted-Mode Scanning Tunneling Microscopy for Atomically Precise Fabrication. arXiv (2025) doi:10.48550/arxiv.2512.24431.

        Speaker: Hadiya Ma (CBN Nano Technologies Inc)
    • (PPD) R2-1 | (PPD)
      • 406
        In situ calibration of liquid scintillator in SNO+

        Deep beneath the Canadian Shield, within the confines of the SNOLAB underground laboratory situated at the Vale-Creighton mine southwest of Sudbury, Ontario, lies the SNO+ experiment. This experiment, the successor to SNO, aims to observe neutrinoless double beta decay. SNO+ has been operating as a liquid scintillator neutrino detector since 2022. Specific hardware was designed to maintain the strict radiological background mitigation criteria of the experiment. Prior to the recent assembly of this equipment, the existing backgrounds, integrated over several-month periods, were used to obtain the sets of decay events required to evaluate the detector’s behaviour. The completion of the deployment hardware has enabled the collection of a statistically significant calibration dataset spanning multiple deployments, totalling an aggregate of 16 days.

        Throughout these deployments, the position response and accuracy of the deployment system were evaluated, providing feedback to the hardware development. The excellent accuracy of these systems, better than the detector resolution, will be presented. In addition, the detector backgrounds and radon levels were monitored throughout the deployment procedures, successfully maintaining the cleanliness goals of the experiment. This presentation will summarize all of the measurements conducted during the deployment period and provide prospects for forthcoming analyses based on these deployments.

        Speaker: Dr Ryan Bayes (Queen's University)
      • 407
        Time-Charge HEALPix Direction Reconstruction Fitter for Supernova Neutrinos in Super-Kamiokande

        Supernova (SN) localization from water-Cherenkov neutrino detectors is critical for capturing early optical observations of the next galactic SN, as neutrinos are the earliest observables arriving well before shock breakout. SN neutrino bursts detected by Super-Kamiokande (SK) produce thousands of PMT time-charge (TQ) signals which contain directional information. Our current direction reconstruction pipelines at SK reconstruct the vertex, energy, and directions of all events, enabling highly resolved but computationally expensive real-time SN localization. However, our current event-level reconstruction increases the alert latency of SN directions when sending notices to the General Coordinates Network (GCN). We are developing a new direction fitter that maps individual PMT hit TQ data onto a HEALPix sphere, directly extracting directional information without relying on individual event reconstruction. In this presentation, we describe the characteristics of the TQ hit map and the directional analysis techniques used to optimize the latency of SN localization. Preliminary results show that direction reconstruction using the TQ fitter is done in O(1 sec) compared to O(90 sec) with our current reconstruction fitters, though consistent directional accuracy requires further development.
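
        The core idea, replacing per-event reconstruction with a direct pixelised map of hit directions, can be illustrated with a toy sketch. The snippet below is not the SK pipeline: it bins charge-weighted PMT hit directions on a coarse (theta, phi) grid as a stand-in for a HEALPix pixelisation and returns the centre of the highest-charge cell. The function name, grid granularity, and input format are illustrative assumptions.

```python
import math
from collections import defaultdict

def tq_direction_fit(hits, n_theta=18, n_phi=36):
    """Toy TQ-style direction fit (illustrative, not the SK implementation).

    `hits` is an iterable of (theta, phi, charge) tuples for individual PMT
    hits. Charges are accumulated on a coarse angular grid (a stand-in for a
    HEALPix map) and the centre of the highest-charge cell is returned,
    without any per-event vertex or energy reconstruction.
    """
    grid = defaultdict(float)
    for theta, phi, q in hits:
        i = min(int(theta / math.pi * n_theta), n_theta - 1)
        j = min(int((phi % (2 * math.pi)) / (2 * math.pi) * n_phi), n_phi - 1)
        grid[(i, j)] += q
    (i, j), _ = max(grid.items(), key=lambda kv: kv[1])
    return ((i + 0.5) * math.pi / n_theta, (j + 0.5) * 2 * math.pi / n_phi)

# A cluster of hits near (theta, phi) = (1.0, 2.0) plus one isolated noise hit:
theta_hat, phi_hat = tq_direction_fit([(1.0, 2.0, 1.0)] * 10 + [(0.3, 5.0, 0.5)])
```

        Because the map is filled hit by hit in a single pass, the peak can be read off immediately, which is the property that makes this style of fitter fast compared with event-level reconstruction.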

        Speaker: Nikolas Boily (University of Regina)
      • 408
        Multi-PMT detectors for Hyper-Kamiokande

        The Hyper-Kamiokande Far Detector is a 258 kilotonne water Cherenkov detector under construction in Kamioka, Japan. The experiment will use accelerator, atmospheric, and astrophysical neutrinos to search for new physics with unprecedented statistical accuracy. Light from neutrino interactions is detected by 20,000 20" PMTs and 800 multi-PMT detectors (mPMTs), which are an important part of the calibration of the detector. The Canadian LED-mPMT variants contribute to the suppression of the systematic errors of the experiment. The mPMTs provide increased angular resolution and are less vulnerable to stray magnetic fields, allowing accurate calibration of the Far Detector and increased energy resolution. The LED-mPMT variant includes 5 ultra-fast LED-based light injectors with custom collimators and diffusers. These will aid in measuring the optical properties of the ultra-pure water and in accurately determining the angular response of the 20" PMTs in Hyper-Kamiokande.

        In this talk I will present the LED-mPMT detectors being assembled and commissioned at Carleton University. I will describe their operation at the Far Detector and the unique role they play towards the control of systematic errors in the Hyper-Kamiokande Experiment.

        Speaker: Robert Collister (Carleton University)
      • 409
        Track Matching and Momentum Reconstruction at the DUNE Near Detector

        The Deep Underground Neutrino Experiment (DUNE) is a next-generation particle physics experiment designed to study neutrino oscillation (in which neutrinos of one flavour transform into other flavours as they travel) alongside a broad program of other research. It will consist of two detector facilities, the Near Detector (ND) at the Fermi National Accelerator Laboratory (Fermilab) near Chicago, and the larger Far Detector (FD) which will be built 1300 km away in Lead, South Dakota.

        The ND will characterize the initial energy spectrum and flavour makeup of the neutrino beam produced at Fermilab prior to its propagation through the ground to the FD, which will measure the extent to which the population of each flavour type within the beam has changed. In the first phase of DUNE's operations, both the ND and FD will use Liquid Argon Time Projection Chambers (LArTPCs) to detect and measure the energy of the neutrinos. Should a neutrino interact within the liquid argon, the resulting charged particles will produce both scintillation light and trails of ionization as they pass through the medium. Muon neutrinos will produce muons, which tend to leave long, signature tracks in the LArTPC. The momentum of a muon can be calculated from the length it travels before stopping, which allows the energy spectrum of the initial muon neutrino beam to be characterized. The LArTPC at the DUNE ND (ND-LAr) will detect the scintillation light and the freed electrons from the ionized argon and reconstruct three-dimensional particle traces from them. However, since muons are minimally-ionizing particles, at DUNE's energy range most muon tracks will be longer than ND-LAr itself. On its own, ND-LAr would only see a fraction of the track length of these uncontained muons, and would underestimate their momenta.
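        The range-to-momentum relation described above can be sketched under a constant-stopping-power approximation (illustrative only: a mean dE/dx of ~2.1 MeV/cm for muons in liquid argon is assumed, whereas real analyses use tabulated CSDA range tables):

```python
import math

M_MU = 105.66      # muon mass, MeV/c^2
DEDX_LAR = 2.1     # assumed mean stopping power in liquid argon, MeV/cm

def momentum_from_range(track_cm, dedx=DEDX_LAR):
    # Constant-dE/dx approximation: the kinetic energy T deposited over
    # the full track length is converted to momentum relativistically,
    # p*c = sqrt(T^2 + 2*T*m*c^2).
    t = dedx * track_cm                      # kinetic energy, MeV
    return math.sqrt(t * (t + 2.0 * M_MU))   # p*c, MeV
```

        This also makes the problem with uncontained muons concrete: if only part of the track is visible, the deposited energy, and hence the inferred momentum, is underestimated.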

        The Muon Spectrometer (TMS) is a scintillator detector located downstream of ND-LAr that will stop a significant number of the muons that escape ND-LAr while tracking their trajectories. The detector can also distinguish the charge sign of the muons. Muons that stop within TMS have a known total track length and momentum, provided that tracks belonging to the same muon in ND-LAr and TMS are correctly matched. This talk will outline methods being developed to match tracks between the components of the ND and illustrate how muon momentum can be reconstructed from these tracks.

        Speaker: Quinton Weyrich (York University)
      • 410
        Neutron backgrounds for ARGO

        The Global Argon Dark Matter Collaboration (GADMC) is designing a direct detection dark matter experiment using liquid argon, named ARGO. ARGO will utilize 400 tonnes of underground argon, with a fiducial volume of 300 tonnes. The activity of long-lived beta-emitter $^{39}$Ar will be reduced by at least a factor of 1000 in underground argon compared to atmospheric argon. This reduction decreases electron-recoil backgrounds in the dark matter search region, making it feasible to scale up to the ARGO detector which is two orders of magnitude larger than the currently operational DEAP-3600 experiment.

        Since neutrons can produce nuclear recoils through elastic scattering and mimic dark matter signals, mitigating this background is crucial during the early stages of detector designs. We have performed detailed simulations using the RAT software framework, based on GEANT4 and ROOT, to assess radiogenic neutron backgrounds and develop strategies for their mitigation. In my talk, I will present the latest results from this study, with the goal of achieving less than one neutron leakage for a 3000 tonne·year exposure of ARGO.

        Speaker: Susnata Seth (Carleton University)
    • (DPMB) R2-2 | (DPMB)
    • (DAMOPC) R2-3 | (DPAMPC)
      • 411
        High Precision Theory and Experiment for the Rydberg P-States of Helium up to n = 35

        High precision variational calculations in Hylleraas coordinates are presented for all singlet and triplet $P$-states of helium up to principal quantum number $n = 35$, with a uniform accuracy of 1 part in $10^{22}$ for the nonrelativistic energy. Mass polarization, relativistic, and quantum electrodynamic effects are included to achieve a final accuracy of $\pm$1 kHz or better for the ionization energies of the Rydberg states of $^4$He in the range $24 \le n \le 35$. The results are combined with 11 transition frequency measurements of Clausen et al., Phys. Rev. A 111, 012817 (2025) to obtain complementary measurements of the ionization energy of the $1s2s\;^3S_1$ state that do not depend on quantum defect extrapolations to the series limit. The result from the triplet spectrum yields an ionization energy of 1 152 842 742.728(6) MHz, which is larger than the experimental value by 14 $\pm$ 17 kHz, in agreement within uncertainties. However, it confirms a much larger 9$\sigma$ discrepancy of $0.468 \pm 0.055$ MHz with the theoretical ionization energy of Patkós et al., Phys. Rev. A 103, 042809 (2021). The results provide a test of the quantum defect extrapolation method at the level of $\pm$17 kHz [1].

        [1] G.W.F. Drake, A.T. Bondy, O.P. Hallett and B.C. Najem, Phys. Rev. A 113, 012810 (2026).
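        For context, the quantum-defect extrapolation to the series limit that these complementary measurements avoid is conventionally written in the Ritz form (standard textbook expression, not taken from Ref. [1]):

```latex
E_n = E_\infty - \frac{R_M}{\left[\,n - \delta(n)\,\right]^2},
\qquad
\delta(n) = \delta_0 + \frac{\delta_2}{(n-\delta_0)^2}
          + \frac{\delta_4}{(n-\delta_0)^4} + \cdots
```

        where $R_M$ is the mass-scaled Rydberg constant and the $\delta_i$ are fitted to the measured series; extrapolating $n \to \infty$ yields the series limit $E_\infty$, whereas the calculations above determine the Rydberg-state energies directly.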

        Speakers: Dr Aaron Bondy (Drake University), Gordon Drake (University of Windsor)
      • 412
        Amplitude squeezing with pulsed second harmonic generation

        We introduce a novel computational method for simulating amplitude squeezing in an optical parametric amplifier (OPA), allowing us to include some of the quantum properties of the pump pulse. The theoretical treatment of squeezing in an OPA is usually performed by assuming that the pump pulse is classical and unchanging, with some methods including the effects of changing pump pulse amplitude. The quantum aspects of the pump are usually ignored. This is a good approximation for squeezed vacuum generation, since the quantum fluctuations of the pump do not couple to the squeezed vacuum state. Quantum fluctuations of the pump are important when there is a nonzero input signal to the OPA, as is the case for amplitude squeezing.

        Our method models some of the quantum properties of the pump and signal by invoking the Gaussian approximation for both the signal and pump quantum states. We calculate the first and second moments of the quantum states using a pseudo-spectral evolution of the equations of motion. This allows us to study the 3D multimode properties of the bright squeezed states produced by an OPA, including the change in the amplitude of the pump pulse, and the quantum fluctuations of the pump pulse.

        Second-harmonic generation (SHG) can produce a second-harmonic field that is itself squeezed. We study pulsed SHG and characterize the maximum amplitude squeezing that can be obtained in both the fundamental and second harmonic pulses. The fact that the second harmonic pulse is squeezed is relevant to amplitude squeezing, since most experimental implementations use SHG to produce the pump pulse that is used in the OPA. We further study the impacts of using a squeezed second-harmonic pulse on the amplitude squeezing performance. Both of these studies are made possible by the development of our novel computational method.

        Speaker: Peter Rose (University of Ottawa)
      • 413
        Large Momentum Diffusion from the Dipole Force of Travelling Waves

        In the presence of a resonant light field, atoms of a gas spontaneously scatter photons in random directions, leading to a momentum diffusion $D$ that causes the temperature to increase at a rate proportional to the spontaneous emission rate. When light is detuned from the atomic resonance by a frequency $\Delta$, the scattering rate decreases rapidly, resulting in $D_A \propto 1/\Delta^2$. In a standing wave configuration, atoms redistribute photons through stimulated emission and experience a dipole force proportional to $1/\Delta$. However, this redistribution process does not lead to momentum diffusion, as photons from a standing wave state $\big[ |+\rangle + |-\rangle \big]/\sqrt 2$ can only be transferred between counterpropagating components of the same quantum state. Since the average momentum of the state is zero, there is no momentum diffusion, indicating that the force is conservative. This property allows for experimental trapping of cold atoms with a very low heating rate.

        This work addresses the case where the dipole force is generated by independent travelling waves in the weak field limit, described by a statistical mixture of photons within a reservoir. In this context, photon redistribution between travelling waves causes a momentum kick in a random direction, resulting in a dipole force that is no longer conservative. The irreversible process yields a diffusion coefficient proportional to the stimulated emission rate $D_B \propto 1/\Delta$, which is in general much larger than in the case of a standing wave. This can significantly increase the temperature of atoms interacting with broadband radiation.

        We examine the implications of this large heating rate on the temperatures of Earth’s and stellar atmospheres, as well as for general heating processes in astrophysics. Additionally, I propose an experiment using an atom interferometer that could detect momentum diffusion of atoms resulting from the travelling waves of independent counter-propagating beams generated by broadband laser sources.

        Speaker: Louis Marmet (York University)
      • 414
        Pure and Mixed State Entanglement Dynamics in Tavis–Cummings Model with Squeezed Coherent Thermal States

        We investigate how environmental noise and atomic state purity jointly shape entanglement dynamics in a two-atom cavity QED system described by the single-mode Tavis–Cummings model. The atoms are initially prepared either in maximally entangled Bell states or in mixed Werner states, while the cavity field is modeled by generalized single-mode squeezed coherent thermal states, allowing thermal and quantum noise to compete on equal footing. Atom–atom and atom–field entanglement are characterized using concurrence and negativity, respectively.

        We show that thermal photons strongly degrade entanglement and prolong entanglement sudden death, whereas squeezing counteracts decoherence and enhances entanglement resilience. A central result is the striking qualitative difference between pure and mixed atomic states: Bell states not only sustain stronger entanglement but also actively induce nonclassical field states, while Werner states suppress both entanglement and field nonclassicality, even in the presence of squeezing.

        We further demonstrate that interaction mechanisms such as dipole–dipole coupling, detuning, and Kerr nonlinearity provide powerful tools for controlling entanglement flow. In particular, Kerr nonlinearity enables tunable redistribution of entanglement between atomic and atom–field subsystems, with the direction and efficiency of transfer governed by the initial atomic mixedness.

        Our results highlight atomic state purity as a key control parameter for protecting and steering entanglement in noisy light–matter systems, with implications for realistic cavity-based quantum technologies.

        Speaker: Kous Mandal (University of Windsor)
      • 415
        Quantum dark bands and dynamical phase transitions

        Traditionally, phase transitions are considered in equilibrium. However, over the past decade there has been considerable interest in non-equilibrium or dynamical phase transitions (DPTs) where time plays the role of the control parameter and the transition happens at a critical time. DPTs have been primarily studied in the context of quantum many-body systems, including in experiments on cold atoms in optical lattices and also trapped ions. In this talk I will show an unexpected connection between DPTs and an everyday natural phenomenon: rainbows. We will see that Alexander's dark band (the dark band between the primary and secondary bows in rainbows described by Alexander of Aphrodisias around A.D. 200) is essentially the same phenomenon as a DPT.

        Speaker: Dr Duncan O'Dell (McMaster University)
    • (DCMMP) R2-4 | (DPMCM)
    • (DQI) R2-5 | (DIQ)
      • 416
        From Testing Reality to Certifying Randomness: The Leggett–Garg Trilogy and QRange

        The Leggett–Garg inequality (LGI) was originally proposed as a way to test “macrorealism”: the classical intuition that a system always possesses definite properties, whether or not we look. In this talk, I will describe how a trilogy of experiments on LGI takes us from precision tests of quantum reality to a practical route for certifying randomness, culminating in our QRange quantum random number generator.
        In the first part, I will focus on a single-photon interferometric architecture in which we realise one of the most loophole-tight tests of macrorealism to date [1]. By combining LGI and related No-Signalling-in-Time conditions with careful control of measurement invasiveness, detector inefficiencies, multiphoton contamination and preparation/coincidence loopholes, we obtain a clear and robust violation of macrorealist bounds for a genuinely single system evolving in time.
        The second part explains how such temporal correlations can be turned into a resource. By mapping the observed LGI violation to a lower bound on the min-entropy, we obtain semi-device-independent guarantees on the unpredictability of the measurement outcomes, and demonstrate LGI-based quantum random number generation in a photonic platform [2]. Finally, I will show how the same temporal-correlation framework can be implemented on contemporary noisy quantum processors [3], providing a platform-agnostic route to certified randomness.
        I will conclude by outlining how these ideas are engineered into QRange, a deployable QRNG module in which “testing reality” via LGI becomes the underlying certificate for high-quality randomness, with applications to cryptography, simulation and AI-oriented workloads.
        1. Loophole-free interferometric test of macrorealism using heralded single photons, K. Joarder, D. Saha, D. Home and U. Sinha, PRX Quantum 3, 010307 (2022).
        2. Single-system based generation of certified randomness using Leggett-Garg inequality, P. P. Nath, D. Saha, D. Home and U. Sinha, Phys. Rev. Lett. 133, 020802 (2024).
        3. Certified random number generation using quantum computers, P. P. Nath, A. Sinha and U. Sinha, Front. Quantum Sci. Technol. 4 (2025), https://doi.org/10.3389/frqst.2025.1661544

        Speaker: Prof. Urbasi Sinha (University of Calgary and Raman Research Institute, Bengaluru)
      • 417
        Stochastic Modeling of a Memory-Assisted Measurement-Device-Independent Quantum Key Distribution System in Fiber Optics Metropolitan Environments

        The vision of a global quantum internet is at once ambitious and enthralling, but building metropolitan quantum key distribution (QKD) networks faces real-world challenges, including channel loss and the need for synchronization between users. Deploying QKD at metropolitan scale is therefore a critical step toward global quantum-secure communications.
        This talk presents a stochastic model for predicting key rates in a Memory-Assisted Measurement-Device-Independent QKD (MA-MDI-QKD) system operating over fiber optic cables. By incorporating asynchronous quantum memory loading through a Monte Carlo-based global/local clock scheme, the model accounts for real-world parameters such as fiber losses, detector efficiencies and dark counts, and quantum memory efficiency and coherence time. Simulations for metropolitan distances of 10–50 km demonstrate that asynchronous MA-MDI-QKD outperforms direct BB84 and synchronous MDI-QKD protocols, particularly at longer ranges, where the memory-assisted architecture halves the effective transmission distance. The tool provides a practical, open-source framework for designing and optimizing MA-MDI-QKD networks in urban environments, offering valuable insights for the integration of quantum communications into existing metropolitan infrastructures.
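        A minimal sketch of the kind of Monte Carlo clocking scheme described, under strongly simplified assumptions (one loading attempt per clock slot with fixed success probability, and a round counted as successful when both memories load within a shared coherence window; the actual model tracks losses, detector effects, and memory parameters):

```python
import random

def mc_success_rate(p_load, coherence_slots, trials=20000, seed=1):
    # Toy Monte Carlo for asynchronous memory loading: each side keeps
    # trying once per clock slot until its quantum memory loads; a round
    # succeeds if the two loading times differ by no more than the
    # coherence window (in slots).
    rng = random.Random(seed)

    def slots_to_load():
        n = 1
        while rng.random() >= p_load:   # geometric number of attempts
            n += 1
        return n

    ok = 0
    for _ in range(trials):
        a, b = slots_to_load(), slots_to_load()
        if abs(a - b) <= coherence_slots:
            ok += 1
    return ok / trials
```

        Even this toy version shows the qualitative trade-off in the abstract: longer memory coherence (a wider window) raises the fraction of usable rounds, which is where the asynchronous scheme gains over a synchronous one.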

        Speaker: Amber Hussain (Carleton University)
      • 418
        Stochastic Modeling of a Memory-Assisted Measurement-Device-Independent Quantum Key Distribution System in Free-Space Metropolitan Environments

        The implementation of metropolitan-scale Memory-Assisted Measurement-Device-Independent Quantum Key Distribution (MA-MDI-QKD) stands as a crucial milestone on the pathway to global quantum-secure communication. In this presentation, I will present a simple and intuitive stochastic model to predict key distribution rates in a MA-MDI-QKD scheme that addresses the real-world parameters inherent to free-space quantum communication channels. The memory-assisted design allows the algorithm to leverage asynchronously loaded quantum memories when predicting the distribution rates. Focusing on metropolitan distances, I will present simulations tailored to a system based on free-space links and field-deployable quantum memory. The model predicts key rate distributions over ranges of 10–50 km for a set of atmospheric parameters and a selection of quantum memory (QM) efficiencies and coherence times. This tool provides impactful insights into the deployment and optimization of practical MA-MDI-QKD networks in urban environments, and this streamlined approach is a valuable addition to existing quantum network simulators for the smooth integration of quantum networking into communications engineering.

        Speaker: Fares Nada (Aerospace Engineering Student at Carleton University)
      • 419
        Discussion
    • (PPD) R2-6 | (PPD)
      • 420
        PIONEER: A next-generation rare pion decay experiment

        PIONEER is a rare pion decay experiment that will run at the Paul Scherrer Institute (PSI) in Switzerland. In its initial phase, the primary objective is to significantly improve the measurement of the pion leptonic decay branching ratio:
        $R_{e/\mu} = \mathcal{B}(\pi \to e\nu(\gamma)) / \mathcal{B}(\pi \to \mu\nu(\gamma))$. PIONEER aims to surpass the precision of the current best measurement, obtained by the PIENU experiment at TRIUMF, by more than an order of magnitude. This improvement would bring the experimental precision of $R_{e/\mu}$ to the $10^{-4}$ level, matching the accuracy of Standard Model calculations and providing a stringent test of lepton flavour universality.
        To achieve this ambitious goal, PIONEER will exploit the world’s most intense pion beam at PSI, an active silicon-strip target based on Low Gain Avalanche Detectors (LGADs), and an optimized calorimeter geometry. This talk will present an overview of the experiment and its physics program, including searches for sterile neutrinos and tests of Cabibbo–Kobayashi–Maskawa matrix unitarity. It will also highlight recent detector R&D activities for PIONEER, including LGAD beam tests performed at TRIUMF.

        Speaker: Emma Klemets (UBC, TRIUMF)
      • 421
        The Water Cherenkov Test Experiment: Detector and Physics Lessons Towards Hyper-Kamiokande

        The Water Cherenkov Test Experiment (WCTE) is a 40-ton water Cherenkov detector operated in the T9 beamline of the East Area at CERN from October 2024 to June 2025. It is instrumented with 97 multi-PMT modules, each consisting of 19 3" PMTs. Charged particles in the beam are characterized by a series of trigger scintillators and aerogel Cherenkov threshold detectors on an event-by-event basis before entering WCTE, thus enabling detailed studies of how particles with known momentum, direction, and type are reconstructed in a water Cherenkov detector. In addition, a tagged photon beam is produced using a permanent magnet and hodoscope setup.

        As a technology prototype of the Intermediate Water Cherenkov Detector (IWCD) of the Hyper-Kamiokande (Hyper-K) experiment, WCTE has provided valuable experience in detector construction, commissioning, operation, and calibration.
        Physics data collected during the 2025 run in both pure water and gadolinium-loaded configurations will also offer useful physics input to current and future water Cherenkov experiments, such as an improved understanding of neutrino multi-nucleon interactions and pion secondary interactions. In this talk, I will present the detector systems of WCTE and discuss the preliminary results and their impact on the physics goals of Hyper-K.

        Speaker: Xiaoyue Li (TRIUMF)
      • 422
        Diffusion of radon daughters in polymer

        Radon and its decay products constitute significant sources of background in rare-event experiments, including dark matter and neutrino searches, due to their unavoidable origin from naturally occurring uranium. Both $^{222}$Rn and its long-lived progeny, $^{210}$Pb, can diffuse from detector material surfaces, resulting in persistent background contributions in low-background experiments. To study this effect, a dedicated system was developed comprising a radon source, a vacuum chamber with a strong electric field, and a thin film for collecting radon daughters, enabling the deposition of $^{210}$Pb onto a Nylon-6 thin film. This presentation will describe the hardware setup of the system and discuss results on the diffusion behavior of $^{210}$Pb and $^{210}$Po under different humidity conditions.

        Speaker: Mr Pushparaj Adhikari (Carleton University)
    • (DTP) R2-7 | (DPT)
      • 423
        Modelling Thermoelectric Figure of Merit for Quantum Materials Using the Lambert W Function

        Achieving high thermoelectric performance requires reconciling the competing dependences of the Seebeck coefficient, electrical conductivity, and thermal conductivity that together determine the figure of merit ZT. Our work develops a unified analytical framework that combines polylogarithmic representations of Fermi–Dirac statistics with multibranch formulations of the Lambert W function to model these interdependent transport coefficients more rigorously. By extending classical transport theory to include energy-dependent scattering and realistic quantum-statistical carrier distributions, we obtain closed-form expressions for electrical conductivity, Seebeck coefficient, Lorenz number, and electronic thermal conductivity. A key result is an analytic inversion of the reduced chemical potential ϕ, enabling direct extraction of carrier concentration and degeneracy regime from experimentally measurable quantities. This analytic structure avoids the iterative numerical inversion typically required in transport modelling, allowing transparent parameter dependence and clearer physical interpretation.
        Our framework facilitates extremum analyses for power factor and ZT, identifying the carrier densities, scattering parameters, and temperature windows that optimize thermoelectric performance across different material classes. The resulting equations highlight how quantum effects, band curvature, and scattering asymmetry shape the interplay between transport coefficients, and provide scalable criteria for materials design beyond numerical fitting alone. In particular, the derived extremum relations reveal universal behaviours that persist across semiconductor families, offering guidance for both bulk and nanostructured thermoelectrics.
        By integrating polylogarithmic methods with the multivalued structure of the Lambert W function, this work establishes a generalizable modelling approach for evaluating and optimizing thermoelectric behaviour in emerging quantum and low-dimensional materials. The resulting closed-form relations offer a pathway toward predictive transport modelling and inform strategies for enhancing sustainable energy harvesting and waste-heat recovery technologies.
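        Since the framework's closed-form inversions rest on the Lambert W function, a minimal numerical evaluation of the principal branch $W_0$ may help fix ideas (illustrative only; the results above are analytic, and production codes would use a library routine):

```python
import math

def lambert_w(x, tol=1e-12, max_iter=50):
    # Principal branch W0(x) for x >= 0, solving w * exp(w) = x
    # by Newton iteration; log1p(x) is an adequate starting guess
    # on the non-negative axis.
    w = math.log1p(x)
    for _ in range(max_iter):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w
```

        Any quantity of the form $y = w e^{w}$ (as arises when inverting degenerate carrier-statistics relations for the reduced chemical potential) can then be inverted as $w = W_0(y)$ in a single evaluation rather than by iterative root-finding on the full transport expressions.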

        Speaker: Dr Farrukh Chishtie (Department of Occupational Science and Occupational Therapy)
      • 424
        Local pumping of competing order: Dynamical order within an expanding synchronization front

        Many ordered physical systems are proximate to a low-energy competing phase. This can manifest as a roton-like feature within a dispersing collective mode. A paradigmatic example is the attractive Hubbard model with its superfluid ground state. Close to half-filling, it hosts a roton-like excitation associated with a competing charge density wave order. Inspired by pump-probe experiments, we explore a local density modulation that ‘pumps’ the charge density wave order. Using a dynamical simulation scheme that includes correlations at all scales, we find an enhanced response when the modulation frequency matches the roton gap. Resonant pumping leads to two propagation fronts. The first corresponds to an incoherent disturbance propagating radially outwards. It moves at a constant speed set by the sound velocity associated with the Goldstone mode of the superfluid. The second is a synchronization front, which defines a zone with coherent, dynamical charge density wave order. This zone expands with a radius that grows as the square root of time, akin to diffusive motion. The diffusion coefficient is set by the curvature of the collective mode around the roton minimum. Our results can be directly tested in ultracold atomic gases. They can motivate studies of dynamical ordering in solids, e.g., by optical pumping using a small spot-size laser or a terahertz STM.

        Speaker: Ganesh Ramachandran
      • 425
        Newton Revisited: Modifications and Practical Insights into Classical Mechanics

        This paper challenges the conventional application of Newton's second law, demonstrating its significant limitations in real-world environments. Through experiments measuring friction and air resistance, and through analyses extending to celestial bodies like Pluto, we reveal major discrepancies between theoretical predictions and observed force dynamics. Our findings underscore the critical need for refined models that incorporate resistance effects, paving the way for more accurate force measurement and management in both terrestrial and extraterrestrial contexts.

        Speaker: Amritpal Singh Nafria (Lamrin Tech Skills University)
      • 426
        Resonant-State Expansion for the Non-Relativistic Wave Equation in One Dimension

        The resonant-state expansion (RSE), a rigorous perturbation theory recently developed in electrodynamics, is here applied to the non-relativistic wave equation in one dimension. The resonant states (RSs) of a symmetric double quantum well structure, modelled by a combination of delta functions, are first calculated. These RSs are then taken as an unperturbed basis for the RSE. The RSE is first verified for triple quantum well systems, showing convergence to the available analytic solution as the number of basis resonant states increases. The method is then applied to more complicated systems, such as multiple quantum well and barrier structures. Results are compared with the eigensolutions for triple quantum wells and infinite periodic potentials, revealing the nature of the resonant states in the studied systems.

        Speaker: Dr Abdullahi Tanimu (Department of Physics, Faculty of natural and applied science, Umaru Musa Yar'adua University Katsina, Nigeria)
    • (DPP-DASP) R2-8 | (DPP-DPAE)
      • 427
        Precision of Total Electron Content Estimated Using GPS Observables

        Total Electron Content (TEC) measurements derived from Global Positioning System (GPS) signals have become indispensable for application-driven ionospheric characterization and for the detection of “weak ionospheric signals” resulting from seismic and weather activity, underpinning the accuracy of satellite navigation, space weather forecasting, and ionospheric corrections in communication systems. Scrutiny of the precision of GPS-derived TEC is therefore essential. Provided all error sources are accurately identified and accounted for, code-based TEC determinations provide an absolute measure of TEC, albeit with modest precision due to significant measurement noise and multipath errors, whereas carrier-phase observations are far more precise but inherit an unknown integer ambiguity. The precision of GPS-derived TEC depends on careful mitigation of several error sources and on two significant assumptions. A primary factor is the differential code bias (DCB), whose magnitude can reach several to tens of nanoseconds (equivalent to many TECU), making it a significant error source if uncalibrated. Modern TEC processing thus relies on robust bias calibrations, often using global networks and stable reference models, to remove these inter-frequency biases. Another major concern is multipath and measurement noise, which predominantly affect code measurements. Improved antenna design and receiver technologies have helped minimize the impact of multipath, but residual multipath can still limit precision in degraded signal environments.
        Most previous efforts to determine the precision of TEC measurements have focused on the hardware and the DCBs, with very little attention given to the assumptions about the ionospheric conditions made in estimating the TEC. Two primary assumptions are made in the estimation of TEC using GPS observables: a) the ionosphere is purely refractive, and b) higher-order effects are negligible. The effect of these two assumptions on the precision of the TEC estimated from GPS observables has not been scrutinized. In this study, we examine their effect on the relative TEC precision using high-data-rate (100 Hz) phase observables on the L1, L2, and L5 frequencies. The ionospheric conditions under which the precision deteriorates will be discussed.
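        For reference, the standard first-order, geometry-free combination underlying code-based TEC estimation can be sketched as follows (textbook relation; DCBs, noise, and the higher-order terms discussed above are deliberately ignored):

```python
F_L1 = 1575.42e6   # GPS L1 carrier frequency, Hz
F_L2 = 1227.60e6   # GPS L2 carrier frequency, Hz
K = 40.3           # first-order ionospheric constant, m^3/s^2

def slant_tec_from_code(p1_m, p2_m, f1=F_L1, f2=F_L2):
    # First-order, purely refractive model: the differential code delay
    # P2 - P1 (metres) maps to slant TEC (electrons/m^2) as
    # TEC = f1^2 f2^2 / (K (f1^2 - f2^2)) * (P2 - P1).
    return (f1**2 * f2**2) / (K * (f1**2 - f2**2)) * (p2_m - p1_m)
```

        With these frequencies, a differential delay of about 0.105 m corresponds to roughly 1 TECU (1e16 electrons/m^2), which is why sub-TECU precision demands the bias calibration and noise mitigation described above.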

        Speaker: Matheus Maica (University of New Brunswick)
      • 428
        JWST/MIRI-MRS Integral-Field Spectroscopy of the Galactic Center and IRS 16 NW

        This study presents a spectroscopic analysis of the central few parsecs around Sgr A* using the Mid-Infrared Instrument in Medium-Resolution Spectroscopy (MIRI-MRS) Integral-Field Spectroscopy (IFS) mode on JWST. Data consist of calibrated three-dimensional spectral cubes spanning the full MRS coverage λ∈[4.9, 27.9] μm at resolving power R∈[1330, 3750]. From these cubes we derive region-integrated spectra that trace dynamics in the immediate environment of the Galactic Center (GC). The spatial sampling and spectral resolution provide the first systematic mid-IR integral-field view of these dynamics at the GC. As an initial application, a 13-spaxel aperture spectrum of the Ofpe/WN9 star IRS 16 NW is analyzed in Channel 1. Residual sub-band discontinuities are corrected by anchoring to the most photometrically stable sub-band and applying additive offsets at seams. Candidate emission and absorption features are isolated via sliding-median continuum subtraction and a prominence-based significance test, then identified by rest-wavelength matching within standard GC velocity bounds. Lines are refined through multi-component deblending (with Gaussians) and velocity-based association across lines. Unknown lines and potential implications are discussed.

        Speaker: Zachary Ireland (University of Toronto)
      • 429
        CHASM: A Bayesian Framework for Predicting Ionospheric Scintillation Over the Canadian Arctic

        Rapid fluctuations in GPS signal phase and amplitude—known as scintillation—degrade positioning accuracy in high-latitude regions. This work introduces the Canadian High Arctic Scintillation Model (CHASM), a Bayesian inference framework that predicts scintillation probability and intensity over the Canadian Arctic with high spatial and temporal accuracy under both quiet and disturbed geomagnetic conditions.
        Using GPS observations from the Canadian High Arctic Ionospheric Network (CHAIN) spanning Solar Cycles 24 and 25 (2008–2024), we processed raw data to remove multipath effects, radio-frequency interference, and instrumental noise. Scintillation occurrence rates were quantified using σφ and S4 indices as functions of magnetic latitude and local time, revealing distinct spatiotemporal patterns. These climatological insights were statistically linked to key solar-terrestrial drivers such as F10.7 solar flux, solar wind parameters, interplanetary magnetic field orientation, and ground-based geomagnetic field variations.
        The Bayesian approach first assesses the independence of predictor variables, then constructs prior probabilities for multinomial classification. Posterior probabilities are computed for arbitrary sets of predictors, enabling flexible forecasting. The model captures most variations seen in measured indices, whether associated with transient interplanetary events or background ionospheric conditions. Model accuracy is assessed using cross-correlation analysis, yielding high-resolution forecasts of scintillation indices. This work advances the ability to forecast scintillation and mitigate its impact on navigation and timing systems in the Arctic region.
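        The prior/posterior machinery described above can be illustrated with a minimal discrete naive-Bayes sketch; the binned predictors, class labels, and Laplace smoothing are assumptions for illustration, not CHASM's actual implementation.

```python
import numpy as np

# Illustrative sketch, not CHASM itself: a discrete naive-Bayes classifier
# over binned drivers (e.g. F10.7, IMF orientation), returning posterior
# probabilities for scintillation classes (e.g. quiet / moderate / strong).
# Predictor independence is assumed, mirroring the framework's first step.

def fit_naive_bayes(X, y, n_classes, n_levels, alpha=1.0):
    """Fit class priors and per-feature likelihood tables.

    X: (n_samples, n_features) integer-binned predictors in [0, n_levels).
    y: (n_samples,) integer class labels in [0, n_classes).
    alpha: Laplace smoothing so unseen bins keep nonzero probability.
    """
    n_features = X.shape[1]
    priors = np.bincount(y, minlength=n_classes) + alpha
    priors = priors / priors.sum()
    like = np.full((n_classes, n_features, n_levels), alpha)
    for c in range(n_classes):
        Xc = X[y == c]
        for j in range(n_features):
            like[c, j] += np.bincount(Xc[:, j], minlength=n_levels)
    like /= like.sum(axis=2, keepdims=True)
    return priors, like

def posterior(x, priors, like):
    """Posterior class probabilities for one binned predictor vector x."""
    logp = np.log(priors).copy()
    for c in range(len(priors)):
        logp[c] += np.log(like[c, np.arange(len(x)), x]).sum()
    p = np.exp(logp - logp.max())   # subtract max for numerical stability
    return p / p.sum()
```

        Working in log space avoids underflow when many predictors are combined, and the posterior remains well defined for arbitrary subsets of predictors by simply omitting features from the sum.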

        Speaker: Melika Bazargani (University of New Brunswick)
      • 430
        Detection and Analysis of XRD Effects on PIXL XRF Spectra

        The Planetary Instrument for X-ray Lithochemistry (PIXL) is an instrument on the Mars 2020 rover. It provides high spatial resolution X-ray fluorescence (XRF) spectra of Martian samples. The instrument has a spot size of ~150 µm and is mounted on the rover's arm, allowing rastering over a ~1 cm² area of a sample. Although it is not a dedicated X-ray diffraction (XRD) instrument, its dual detectors, positioned at opposing angles to the target, can sometimes capture XRD signals. These artifacts can skew elemental abundances, particularly for elements such as Fe, Mn, Ca, and Ni, complicating mineralogical interpretations. This project aims to identify, analyze, and eventually model these XRD effects to extract the likely mineralogy of the samples. Detector comparisons are used to statistically flag anomalous signals beyond expected counting noise. Differences are modeled with a constant calibration factor plus Gaussian peak fitting to isolate diffraction contributions. The resulting PIXL data from Mars 2020 will be compared with chemical and mineralogical results from the Alpha Particle X-ray Spectrometer (APXS) on the Mars Exploration Rover (MER) and Mars Science Laboratory (MSL) missions. By applying this approach across entire rasters, we aim to extract both robust XRF compositions and complementary diffraction information.
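        The detector-comparison step can be sketched as follows; the single multiplicative calibration factor and the 5-sigma cut are illustrative assumptions, not PIXL calibration values.

```python
import numpy as np

# Illustrative sketch, not the PIXL pipeline: flag spectral channels where
# the two detectors disagree beyond Poisson counting noise, after a single
# multiplicative calibration factor between them. Diffraction adds counts
# to one detector only, so large normalized differences are XRD candidates.

def flag_xrd_channels(counts_a, counts_b, nsigma=5.0):
    counts_a = np.asarray(counts_a, dtype=float)
    counts_b = np.asarray(counts_b, dtype=float)
    k = counts_a.sum() / counts_b.sum()           # constant calibration factor
    diff = counts_a - k * counts_b
    sigma = np.sqrt(counts_a + k**2 * counts_b)   # Poisson error propagation
    sigma[sigma == 0] = 1.0                       # guard empty channels
    z = diff / sigma
    return np.abs(z) > nsigma, z
```

        Channels that survive this cut could then be passed to the Gaussian peak fitting stage described above to separate the diffraction contribution from the fluorescence continuum.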

        Speaker: Annika Vetter
    • R-MEDAL3 Achievement Medalist Talk | Conférence des médaillés d'honneur
    • Recognitions Ceremony | Cérémonie de distinction
    • SNOLAB Townhall Meeting | Réunion publique SNOLAB Rm 1150 (cap.505) (Health Sciences Bldg., U.Sask.)

    • CINP Townhall Meeting | Réunion publique de l'ICPN Rm 214 (cap.60) (Arts Bldg., U.Sask.)

      • 12:10
        Lunch Break
      • 15:40
        Break
    • Close of Congress | Clôture du Congrès Rm 1150 (cap.505) (Health Sciences Bldg., U.Sask.)

      Conveners: Martin Williams (University of Guelph), Wendy Taylor (York University (CA))
    • IPP AGM | AGA de l'IPP Rm 200 (cap.80) (Arts Bldg., U.Sask.)

      Online link (Join Zoom Meeting):
      https://ualberta-ca.zoom.us/j/99365648971?pwd=TxW6ga3FkEoSvWnFBqXK1dqzZCHL8b.1

      Topic: IPP AGM Online Link (June 13 2025)
      Time: Jun 13, 2025 12:15 PM Saskatchewan

      Meeting ID: 993 6564 8971
      Passcode: 956270

      One tap mobile:
      +17806660144,,99365648971#,,,,956270# Canada
      +12042727920,,99365648971#,,,,956270# Canada

      Dial by your location:
      +1 780 666 0144 Canada
      +1 204 272 7920 Canada
      +1 438 809 7799 Canada
      +1 587 328 1099 Canada
      +1 647 374 4685 Canada
      +1 647 558 0588 Canada
      +1 778 907 2071 Canada

      Find your local number: https://ualberta-ca.zoom.us/u/advS82NEHO

      Convener: Carsten Krauss (University of Alberta)
      • 14:15
        Break - no refreshments served (Starbucks nearby)