I will investigate the propagation of gravitational waves on curved spacetimes within the low energy effective field theory of gravity, where effects from heavy fields are captured by higher dimensional curvature operators. Depending on the spin of the particles integrated out, the speed of gravitational waves at low energy can be either superluminal or subluminal as compared to the causal structure observed by other species. I will clarify why a mild level of superluminality is not in contradiction with causality, analyticity or Lorentz invariance and show how consistent gravitational low energy effective theories can self-protect by ensuring that any time advance and superluminality calculated within the regime of validity of the effective theory is necessarily unresolvable for such theories. These considerations are particularly relevant for putting constraints on cosmological and gravitational effective field theories and I will provide explicit criteria to be satisfied so as to ensure causality.
As a modified gravity theory that introduces new gravitational degrees of freedom, the generalized SU(2) Proca theory (GSU2P for short) is the non-Abelian version of the well-known generalized Proca theory, where the action is invariant under global transformations of the SU(2) group. This theory was formulated for the first time in Phys. Rev. D 94 (2016) 084041, having implemented the required primary constraint to make the Lagrangian degenerate and remove one degree of freedom from the vector field, in accordance with the irreducible representations of the Poincaré group. It was later shown in Phys. Rev. D 101 (2020) 045009 that a secondary constraint, which trivializes for the generalized Proca theory but not for the SU(2) version, was needed to close the constraint algebra. It is the purpose of this talk to implement this secondary constraint in GSU2P and to make the construction of the theory more transparent. Since several terms in the Lagrangian were dismissed in Phys. Rev. D 94 (2016) 084041 via their equivalence to other terms through total derivatives, and not all of the latter satisfy the secondary constraint, the work was not as simple as directly applying the secondary constraint to the Lagrangian pieces of the old theory. Thus, we were motivated to reconstruct the theory from scratch. In the process, we found the beyond GSU2P theory.
Horndeski gravity is the most general scalar-tensor theory with a single scalar field leading to second-order field equations, and after GW170817 it has been severely constrained. In this talk, I will present an analog of Horndeski's theory in the Teleparallel Gravity framework, where gravity is mediated through torsion instead of curvature. It will be shown that, even though many terms are the same as in the curvature case, the phenomenology is much richer in the teleparallel setting because of the nature of the torsion tensor. After this, I will show that by performing tensorial perturbations in this theory on a flat cosmological background, one is able to restore the severely constrained terms of standard Horndeski, providing an interesting way to revive Horndeski gravity. I will conclude by explaining the PPN analysis of this model.
This work investigates a toy model for inflation in a class of modified theories of gravity in the metric formalism. Instead of the standard procedure, which assumes a non-linear Lagrangian $f(R)$ in the Jordan frame, we start from a simple $\phi^2$ potential in the Einstein frame and investigate the corresponding $f(R)$ in the former picture. This approach yields plenty of new pieces of information, namely a self-terminating inflationary solution with a linear Lagrangian, a robust criterion for the stability of such theories, and a dynamical effective potential for the Ricci scalar $R$; moreover, the addition of an ad hoc Cosmological Constant in the Einstein frame leads to a thermodynamical interpretation of this physical system, which allows further insight into its (meta)stability and evolution.
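For orientation, the standard dictionary between the two frames in the metric formalism reads as follows (conventions, in particular factors of $M_{\rm Pl}$, vary between references, and the specific potential studied in the talk is not reproduced here):
$$ \tilde g_{\mu\nu} = f'(R)\, g_{\mu\nu}, \qquad e^{\sqrt{2/3}\,\varphi/M_{\rm Pl}} = f'(R), \qquad V(\varphi) = \frac{M_{\rm Pl}^2}{2}\,\frac{R\,f'(R) - f(R)}{f'(R)^2}, $$
so that prescribing an Einstein-frame potential $V(\varphi)$, such as the $\phi^2$ potential above, implicitly defines the Jordan-frame function $f(R)$.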
Modern cosmology is extensively studied by the scientific community in an attempt to answer many open questions about the universe. The theoretical basis underlying this framework is general relativity. However, there are other approaches, such as modified theories of gravity, that try to solve a series of problems in cosmology, particularly in the study of the early universe during the period known as cosmic inflation. Recently, a general theory has been proposed involving scalar and vector fields, opening up the possibility for new studies in cosmology and astrophysics. In this talk we present a model with scalar and vector fields with broken U(1) symmetry non-trivially coupled to gravity and apply it to the study of the early universe.
Scalar-tensor theories represent extensions of Einstein's gravity through the inclusion of a scalar field with non-minimal coupling. The analysis of these theories has mostly been carried out within the framework of non-degenerate theories (the determinant of the Hessian matrix is non-zero). It is therefore feasible to enlarge the space of modified theories by including the cases in which there is degeneracy, thus circumventing the conditions of the Ostrogradsky instability theorem. From a phenomenological perspective, the expansion of the scalar-tensor theory space opens the possibility of new and attractive explanations of different open questions in cosmology and astrophysics, e.g., the nature of dark energy, dark matter and inflation. The first step in the construction of these theories is to determine the degeneracy conditions for a Lagrangian depending on a metric tensor $g_{\mu \nu}$ and a scalar field $\phi$. Subsequently, the most general action is constructed with quadratic terms in the second derivative of the scalar field, hence the name quadratic theories. In general, an action of this type produces field equations of order higher than two. However, when the degeneracy conditions are applied, the dangerous degrees of freedom are eliminated. We thus seek a classification of all degenerate scalar-tensor theories that are of higher order (in the field equations) and quadratic (in the powers of the field derivatives in the Lagrangian). These theories, being degenerate, are generally free from instabilities.
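As an illustration of the usual starting point of such classifications in the literature (the talk's precise conventions are not given here), the quadratic higher-derivative sector is commonly written as a sum of five elementary terms built from $\phi_{\mu\nu} \equiv \nabla_\mu\nabla_\nu\phi$ and $\phi_\mu \equiv \nabla_\mu\phi$,
$$ L^{(2)} = \sum_{i=1}^{5} A_i(\phi, X)\, L_i, \qquad L_1 = \phi_{\mu\nu}\phi^{\mu\nu}, \;\; L_2 = (\Box\phi)^2, \;\; L_3 = \Box\phi\, \phi^\mu \phi_{\mu\nu} \phi^\nu, \;\; L_4 = \phi^\mu \phi_{\mu\rho} \phi^{\rho\nu} \phi_\nu, \;\; L_5 = (\phi^\mu \phi_{\mu\nu} \phi^\nu)^2, $$
with $X$ the kinetic term of the scalar, together with a non-minimal coupling to the Ricci scalar; the degeneracy conditions are then algebraic relations among the free functions that force the kinetic (Hessian) matrix to be degenerate.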
In this talk, I will discuss the dynamical properties of tracker quintessence models using a general parametrization of their corresponding potentials and show that there is a general condition for the appearance of tracker behavior at early times. I will also discuss how to determine the conditions under which quintessence tracker models can also provide an accelerating expansion of the universe with an equation of state close to $-1$. Apart from the analysis of the background dynamics, the discussion will include linear density perturbations of the quintessence field, treated consistently with the same parametrization of the potential, and the influence they have on some cosmological observables. The generalized tracker models are compared to observations, and their ability to ameliorate the fine-tuning of initial conditions and their consistency with the accelerated expansion of the Universe at late times will be discussed.
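For context, the classic criterion for tracker behavior, which general parametrizations of the potential such as the one used here are designed to capture, can be stated as a condition on the potential $V(\phi)$:
$$ \Gamma \equiv \frac{V\, V''}{(V')^2} > 1, \qquad \Gamma \simeq {\rm const}, $$
where primes denote derivatives with respect to the quintessence field; tracking solutions then converge to a common evolution that is largely insensitive to the initial conditions.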
In this talk we discuss the relation between topological mass generation of 2-forms and generalized Galileon theories for 2-forms, involving the systematic construction of quartic Lagrangians in four dimensions. In terms of massless 1- and 2-forms $A$ and $B$ respectively, the mechanism of topological generation of mass arises as a consequence of the topological interaction term $B\wedge F$, where $F={\rm d}A$ is the field strength of the 1-form. On the other hand, using the systematic Galileon construction it was shown that, apart from the quadratic and quartic Lagrangians, Galileon-like derivative self-interactions for the massive 2-form do not exist, with the only exception of the quartic term $\epsilon^{\mu\nu\rho\sigma}\epsilon^{\alpha\beta\gamma}{}_{\sigma}\partial_{\mu}B_{\alpha\rho}\partial_{\nu}B_{\beta\gamma}$, which corresponds to a total derivative on its own but ceases to be so once an overall general function is introduced. Here we show that it corresponds exactly to the same interaction of topological mass generation. Based on the decoupling limit analysis of the interactions, we bring out supporting arguments for the uniqueness of such a topological mass term and the absence of further Galileon-like interactions. Finally, we discuss some preliminary applications in cosmology, mostly related to non-minimal couplings between gravity and 2-forms.
In this work we study the possibility of obtaining an accelerated expansion from arbitrary couplings between $p$-forms in a 4-dimensional space-time. The Lagrangian is built with couplings between 1- and 2-forms with kinetic functions of a scalar field $\phi$ (a quintessence field in this context). Using a dynamical system approach, we study the evolution of the fields in an anisotropic background, which is a natural framework to determine whether the interaction between $p$-forms can sustain a non-negligible shear. In addition, we find conditions for the cosmological viability of a dark energy dominated epoch. The evolution and stability are also confirmed by numerical integrations.
We study some models where non-Abelian gauge vector fields endowed with an SU(2) group representation are the unique source of inflation and dark energy. These models were first introduced under the names of gaugeflation and gaugessence, respectively. Although several realizations of these models have been discussed, the full space of allowed parameters and initial conditions is not known. In this work, we use a dynamical system approach to find the full parameter space of the massive version of each model. In particular, we find that the inclusion of the mass term increases the length of the inflationary period. Additionally, the mass term implies new behaviours for the equation of state of dark energy, allowing this model to be distinguished from other prototypical models of accelerated expansion. We show that an axially symmetric gauge field can support an anisotropic accelerated expansion within the observational bounds.
In the context of the dark energy scenario, the Einstein Yang-Mills Higgs model in the SO(3) representation was studied for the first time by M. Rinaldi (see JCAP 1510, 023 (2015)) in a homogeneous and isotropic spacetime. We revisit this model, finding in particular that the interaction between the Higgs field and the gauge fields generates contributions to the momentum density, anisotropic stress and pressures, thus making the model inconsistent with the assumed background. We instead consider a homogeneous but anisotropic Bianchi-I spacetime background in this paper and analyze the corresponding dynamical behaviour of the system. We find that the only attractor point corresponds to an isotropic accelerated expansion dominated by the Higgs potential. However, the model predicts non-negligible anisotropic shear contributions nowadays, i.e. the current universe can have hair although it will lose it in the future. We investigate the evolution of the equation of state for dark energy and highlight some possible consequences of its behaviour related to the process of large-scale structure formation. As a supplement, we propose the “Higgs triad” as a possibility to make the Einstein Yang-Mills Higgs model be consistent with a homogeneous and isotropic spacetime.
The Generalized Chaplygin Gas (GCG) model is characterized by the equation of state $P = -A \rho^{-\alpha}$, where $A>0$ and $\alpha < 1$. The model has been extensively studied due to its interesting properties and applicability in several contexts, from late-time acceleration to primordial inflation. Nonetheless, we show that the inflationary slow-roll regime cannot be satisfied by the GCG model when General Relativity (GR) is considered. In particular, although the model has been applied to inflation with $0 < \alpha < 1$, we show that for $-1 < \alpha \le 1$ there is no expansion of the Universe but rather an accelerated contraction. For $\alpha \le -5/3$, the second slow-roll parameter $\eta_H$ is larger than unity, so there is no sustained period of inflation. Only for $\alpha$ very close to $-1$ does the model produce enough $e$-folds, which greatly reduces its parameter space. Moreover, using the Planck 2018 results, we constrain the parameters of the model to $1.391< A < 1.522$ and $-1.0131<\alpha<-1.0103$. Finally, we extend our analysis to the Generalized Chaplygin-Jacobi Gas (GCJG) model. We find that the introduction of a new parameter does not solve the previous problems or change the bounds on the parameters of the model. We conclude that the violation of the slow-roll conditions is a generic feature of the GCG and GCJG models during inflation when GR is considered.
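For reference, the Hubble-flow slow-roll parameters referred to above are conventionally defined as (the abstract does not state its conventions, so factors may differ):
$$ \epsilon_H \equiv -\frac{\dot H}{H^2}, \qquad \eta_H \equiv -\frac{\ddot\phi}{H\dot\phi}, $$
with slow-roll inflation requiring $\epsilon_H \ll 1$ and $|\eta_H| \ll 1$; the claim above is that $\eta_H > 1$ for $\alpha \le -5/3$, so no sustained period of inflation occurs there.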
In this work, we explore upcoming cosmological probes to constrain alternative cosmological models. We focus on a particular dark energy model with a non-negligible contribution during the radiation-dominated epoch, which could therefore have introduced additional degrees of freedom in the Hubble parameter at that time. We consider probes to which these candidates can be subjected in the future, and calculate the upper limits for the observables associated with such dark energy models.
The polarization pattern in the CMB fluctuations can carry an imprint of parity violation in the early universe through a positive measurement of cross-correlation functions that are not parity invariant. Does gravity violate parity? In this talk, I will show how the combination of recent measurements from the Neutron Star Interior Composition Explorer (NICER) with the measurement of the tidal Love number from LIGO/Virgo observations can place a constraint on gravitational parity violation. In particular, these constraints are specialized to dynamical Chern–Simons gravity, a well-motivated effective theory that introduces parity-violating interactions to the Einstein–Hilbert action. The consistency of these measurements with general relativity places the most stringent constraint on gravitational parity violation to date, surpassing all previously reported bounds by seven orders of magnitude.
The large-scale structure bispectrum couples large scales (where general relativity is relevant) and short scales (where non-linearities are important), such that in the squeezed limit it contains information about the dynamics of the early universe. We start by performing a non-linear calculation based on general relativity in the weak-field approximation in order to solve the dynamics of the dark matter fluid. The solutions for the dark matter perturbations are used to compute relativistic corrections to correlation functions. Specifically, we compute the three-point matter correlation function (matter bispectrum) at one loop, and we find its relativistic corrections to be as large as the Newtonian result in the squeezed limit. In this limit we also find the relativistic correction to the bispectrum to be degenerate with a primordial non-Gaussianity signal. We also compute the matter power spectrum, which receives a small relativistic correction. Our next step is to connect the dark matter perturbations with the number density of galaxies through a bias expansion. To begin with, we write the Lagrangian bias expansion in terms of operators built on the curvature of the early-time hypersurface of a comoving observer. We take this as the initial condition and evolve it to obtain the Eulerian bias description in the general relativistic framework, which is also properly renormalized. These are the first steps towards determining the gauge-invariant galaxy bispectrum.
We study a new class of vector dark energy models where multi-Proca fields $A_\mu^a$ are coupled to cold dark matter by a mass-type term. From this, we derive the general covariant form of the novel interaction term sourcing the field equations. This result is quite general in the sense that it encompasses Abelian and non-Abelian vector fields. In particular, we investigate the effects of this type of coupling in a simple dark energy model based on three copies of canonical Maxwell fields that realize isotropic expansion. The cosmological background dynamics of the model is examined by means of a dynamical system analysis to determine the stability of the emergent cosmological solutions. As an interesting result, we find that the coupling function leads to the existence of a novel scaling solution during dark matter domination. Furthermore, the critical points show an early contribution of the vector field in the form of dark radiation and a stable de Sitter-type attractor at late times mimicking dark energy. The cosmological evolution of the system as well as the aforementioned features are verified by numerical computations. Observational constraints are also discussed to put the model in a more phenomenological context in the light of future observations.
In this paper, we study a triplet of inhomogeneous scalar fields, known as a "solid", as a source of anisotropic dark energy. Using a dynamical system approach, we find that anisotropic accelerated solutions can be realized as attractor points for suitable parameters of the model. We complement the dynamical analysis with a numerical solution whose initial conditions are set deep in the radiation epoch. The model predicts a spatial shear within the observational bounds today, even when it is set to zero as an initial condition. The hairy attractors and an ultra-slowly varying equation of state of dark energy close to $-1$ are key features of this scenario. We also analyze the isotropic limit of the model, finding that the solid can be described by a constant equation of state and is thus able to mimic the behaviour of a cosmological constant.
An early period of inflation driven by a rolling scalar field must end by successfully reheating the Universe into the radiation-dominated era before the time of Big Bang Nucleosynthesis. In my talk I consider inflaton decays (both perturbative and resonant) into SM particles, which acquire their mass via couplings to the SM Higgs boson. These decays may be temporarily blocked because the light spectator Higgs field obtains a large effective VEV from quantum fluctuations during inflation. In the first part of the talk I will present a method for calculating the adiabatic density fluctuations that arise because the Universe exhibits a space-dependent reheat temperature, due to the correspondingly space-dependent Higgs-induced particle masses. These fluctuations are severely non-Gaussian due to the inherent non-linearity of the reheating process. Results for the non-Gaussianity parameter $f_{\rm NL}$ are presented in the second part of my talk. Finally, I will show how temperature fluctuations and non-Gaussianity together can provide significant constraints on SM parameters based on Cosmic Microwave Background measurements by Planck.
In the presence of magnetic fields, gravitational waves are converted into photons and vice versa. We demonstrate that this conversion leads to a distortion of the cosmic microwave background (CMB), which can serve as a detector for MHz to GHz gravitational wave sources active before reionization. The non-observation of such distortions gives rise to bounds exceeding laboratory constraints. Future advances in 21cm astronomy may conceivably push these bounds below the sensitivity of cosmological constraints on the total energy density of gravitational waves.
We perform a model-independent study of freeze-in of massive particle dark matter (DM) by adopting an effective field theory framework. Considering the dark matter to be a gauge singlet Majorana fermion, odd under a stabilising symmetry $Z_2$ under which all standard model (SM) fields are even, we write down all possible DM-SM operators up to and including mass dimension eight. For simplicity of the numerical analysis we restrict ourselves to the scalar operators in the SM as well as in the dark sector. We calculate the DM abundance for each such operator dimension, considering both UV and IR freeze-in contributions, which arise before and after electroweak symmetry breaking respectively. After constraining the cut-off scale and the reheat temperature of the universe from the requirement of the correct DM relic abundance, we also study the possibility of connecting the origin of neutrino mass to the same cut-off scale by virtue of lepton number violating Weinberg operators. We thus compare the bounds on the cut-off scale and the corresponding reheat temperature required for UV freeze-in from the origin of light neutrino mass as well as from the requirement of the correct DM relic abundance. We also briefly comment upon the possibilities of realising such DM-SM effective operators in a UV complete model.
In the Starobinsky model of inflation, the observed dark matter abundance can be produced from the direct decay of the inflaton field only in a very narrow spectrum of close-to-conformal scalar fields and spinors of mass $\sim 10^7$ GeV. This spectrum can be, however, significantly broadened in the presence of effective non-renormalizable interactions between the dark and the visible sectors. In particular, we show that UV freeze-in can efficiently generate the right dark matter abundance for a large range of masses spanning from the keV to the PeV scale and arbitrary spin, without significantly altering the heating dynamics. We also consider the contribution of effective interactions to the inflaton decay into dark matter.
I will discuss the fate of the U(1) gauge coupling under the inclusion of vector-like fermions in the Standard Model. Then, motivated by results on quantum gravity contributions to the running of gauge and Yukawa couplings, I will talk about the effect of simple but general corrections to the running of those couplings from the electroweak scale up to sufficiently high energy scales. One of our goals is to explain the pattern observed in the masses of the quark sector of the Standard Model, as well as the mixing angles.
We scrutinise the widely studied minimal scotogenic model of dark matter (DM) and radiative neutrino mass from the requirement of a strong first-order electroweak phase transition (EWPT) and observable gravitational waves at future planned space-based experiments. The scalar DM scenario is similar to the inert scalar doublet extension of the standard model, where a strong first-order EWPT favours a portion of the low-mass regime of DM which is disfavoured by the latest direct detection bounds. In the fermion DM scenario, we obtain a new region of parameter space that favours a strong first-order EWPT, as the restriction on the mass ordering within the inert scalar doublet gets relaxed. While such leptophilic fermion DM remains safe from stringent direct detection bounds, the newly allowed low-mass regime of the charged scalar can leave tantalising signatures at colliders and can also induce charged lepton flavour violation within reach of future experiments. While we obtain this new region of parameter space satisfying the DM relic, a strong first-order EWPT with detectable gravitational waves, light neutrino mass and other relevant constraints, we also improve upon previous analyses of similar models by incorporating appropriate resummation effects in the effective finite temperature potential.
We study the consequences of a non-standard cosmological epoch in the early universe for the generation of the baryon asymmetry through leptogenesis as well as for the dark matter abundance. We consider two different non-standard epochs: one where a scalar field behaving like pressure-less matter dominates the early universe, known as the early matter domination (EMD) scenario, and a second scenario in which the energy density of the universe is dominated by a component whose energy density redshifts faster than radiation, known as the fast expanding universe (FEU) scenario. While a radiation-dominated universe is recovered by the big bang nucleosynthesis (BBN) epoch in both scenarios, high-scale phenomena like the generation of the baryon asymmetry and the dark matter relic are significantly affected. Adopting a minimal particle physics framework known as the scotogenic model, which generates light neutrino masses at one-loop level, we find that in one specific realisation of the EMD scenario the scale of leptogenesis can be lower than in a standard cosmological scenario. The other non-standard cosmological scenarios, on the other hand, can be constrained from the requirement of successful low-scale leptogenesis while simultaneously generating the correct dark matter abundance. Such a low-scale scenario not only gives a unified picture of the baryon asymmetry, dark matter and the origin of neutrino mass but also opens up interesting possibilities for experimental detection.
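A minimal way to parametrize the two non-standard epochs described above, following the conventions commonly used for such scenarios (the talk's exact notation is not reproduced here), is
$$ \rho_\phi^{\rm EMD} \propto a^{-3} \quad \text{(early matter domination)}, \qquad \rho_\phi^{\rm FEU} \propto a^{-(4+n)}, \;\; n > 0 \quad \text{(faster than radiation)}, $$
so that in the FEU case the extra component necessarily becomes subdominant to radiation, $\rho_r \propto a^{-4}$, before BBN.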
We propose a baryogenesis mechanism where axion’s rotation in the potential is initiated by explicit Peccei-Quinn symmetry breaking in the early Universe and gives rise to the observed baryon asymmetry. With the aid of the neutrino Majorana mass term, the Peccei-Quinn charge associated with the rotation is sequentially transferred to the baryon asymmetry. QCD axion dark matter can be simultaneously produced by dynamics of the same PQ field via kinetic misalignment and/or parametric resonance.
We consider the possibility of probing the left-right symmetric model (LRSM) via the cosmic microwave background (CMB) by adopting the minimal LRSM with Higgs doublets, also known as the doublet left-right model (DLRM), where all fermions, including the neutrinos, acquire masses only via their couplings to the Higgs bidoublet. Because of the Dirac nature of the light neutrinos, there exist additional relativistic degrees of freedom that can thermalize in the early universe through their right-sector gauge interactions. We constrain this model from the Planck 2018 bound on the effective relativistic degrees of freedom and also estimate the prospects for planned CMB Stage IV experiments to constrain the model further. We find that a $W_R$ boson mass below 4.06 TeV can be ruled out from the Planck 2018 bound at $2\sigma$ CL in the exact left-right symmetric limit, which is as competitive as the LHC bounds from dijet resonance searches. On the other hand, the Planck 2018 bound at $1\sigma$ CL can rule out a much larger parameter space, out of reach of present direct search experiments, even in the presence of additional relativistic degrees of freedom around the TeV corner. We also study the consequence of these constraints on dark matter in the DLRM by considering a right-handed real fermion quintuplet to be the dominant dark matter component in the universe.
In this talk we will review the cosmological implications of a scalar field dark matter model with an axion-like potential. We will analyze some cosmological observables, such as the 3D and 1D matter power spectra, as well as how physical quantities such as the growth factor of the perturbations and its velocity depend explicitly on the wavenumber in this type of model. We will show the predictions of our model for the halo mass function of small-scale structures, and we will end with the results of the statistical analysis that we carried out using data from both the Cosmic Microwave Background radiation and the Lyman-alpha forest in order to constrain the parameters of our model.
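The axion-like potential referred to above is typically of the cosine form (the specific parametrization used in the talk may differ):
$$ V(\phi) = m_a^2 f_a^2 \left[ 1 - \cos\!\left( \frac{\phi}{f_a} \right) \right], $$
where $m_a$ is the boson mass and $f_a$ the decay constant; for $\phi \ll f_a$ it reduces to the free quadratic potential of standard scalar field dark matter, while the anharmonic terms modify the small-scale behaviour.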
Topological defects, such as domain walls, cosmic strings, and magnetic monopoles, are expected to appear during phase transitions in Grand Unified Theories of particle physics. Such monopoles were estimated to be the dominant matter component in the early universe. However, this prediction is in tension with the fact that monopoles have not been observed, a tension known as the cosmological monopole problem. Different solutions to this problem have been proposed, inflation being the best known. In 1997, Dvali, Liu, and Vachaspati proposed an alternative solution. The main idea is that domain walls, generated during the same phase transition as the monopoles, sweep away the monopoles before decaying. This solution proposes that defect interactions lead to a defect erasure mechanism.
In the present work, we bear out the Dvali, Liu, and Vachaspati (DLV) mechanism in the (2+1)-dimensional Abelian-Higgs model with a sextic potential. We study the unwinding of a vortex line during its collision with a non-topological domain wall containing a core with a Coulomb-like phase, inside which the whole symmetry group is restored. We simulate the collision between a vortex and a domain wall for different regimes of the model's parameters. Within this approach, it is found that none of the vortices crosses the Coulomb vacuum layer. We observe how the collision leads to the unwinding of the vortex and the deconfinement of the magnetic flux, which dissipates in the domain wall's core. Based on these results, we suggest that the DLV mechanism is independent of the parameter values and indicate how it may be generalized.
We study the gravitational production of super-Hubble-mass dark matter in the very early universe. We first review the simplest scenario, where dark matter is produced mainly during slow-roll inflation. Then we consider the cases where dark matter is produced during the transition period between inflation and the subsequent cosmological evolution, studying the limits of smooth and sudden transitions.
We also discuss two possible scenarios, namely the curvaton mechanism and dark matter density modulation, in which non-Gaussianity signals of superheavy dark matter produced by gravity can be enhanced and observed. In both scenarios, the superheavy dark matter couples to an additional light field acting as a mediator. In the case of a derivative coupling, the resulting non-Gaussianities induced by the light field can be large, which can provide inflationary evidence for these superheavy dark matter scenarios.
Within the limits of present cosmological observations, an interacting model of holographic dark energy and matter in a five-dimensional spherically symmetric space-time has been analyzed within the framework of Brans-Dicke theory. We obtain a model universe that undergoes super-exponential expansion. The model predicts that the universe is isotropic and will remain dark energy dominated. The universe does not evolve from an initial singularity but ultimately ends in a big crunch singularity. The values of the Hubble parameter and of the dark energy and matter density parameters are obtained as $H=68.027$, $\Omega_{de} =0.741$ and $\Omega_m = 0.203$ respectively, which are very close to the values estimated by the latest Planck 2018 results.
The Cosmic Microwave Background (CMB) is an open window to the early Universe. To compute the CMB spectrum we need to perturb the FLRW universe, since our universe is no longer homogeneous and isotropic at small scales. Furthermore, the interaction between photons and electrons induces a perturbation in the photon temperature. This interaction can be described by the Boltzmann equations, and by solving these equations we can find the CMB temperature power spectrum. On the other hand, observations of both solar and atmospheric neutrinos strongly suggest that neutrinos have mass. In this work, we will show how to compute the quantities that describe the current matter density, energy density and geometry of the universe for ΛCDM with massive neutrinos, in comparison with the standard ΛCDM model.
One of the great challenges of cosmology is the description of the distribution of matter on large scales. To address this, we present the results of our research, which was recently accepted for publication in MNRAS Letters (2008.08164). In that article we propose to study the cosmic web as a graph ($\beta$-Skeleton), given its filamentary structure. This allows us to use tools from information theory, specifically the entropy, in cosmological simulations to quantify the impact of factors such as cosmic variance, redshift evolution, redshift-space distortions (RSD), and the cosmological parameters, among others.
Finally, we mention how this method can be used in multiple applications in cosmology, and how it can be applied to observational data from projects such as the Dark Energy Spectroscopic Instrument.
Galaxy superclusters are starting to be routinely detected in observational data of the large scale structure of the Universe.
Diverse definitions and algorithms have been presented in the literature with the expectation of building a compelling framework to study superclusters.
In this work we present the strengths of defining superclusters as watershed basins in the velocity divergence field.
We apply this definition on diverse datasets generated from linear theory and N-body simulations, with different grid sizes, smoothing scales and types of tracers.
From this framework emerges a linear scaling relation between the average supercluster size and the autocorrelation length in the divergence field, a result that holds over one order of magnitude, from 10 Mpc $h^{-1}$ up to 100 Mpc $h^{-1}$.
Our results suggest that the divergence-based definition provides a robust context to quantitatively compare results across different observational or computational frameworks.
It can also facilitate the exploration of how supercluster properties depend on cosmological parameters to quantify the possibility of using superclusters as cosmological probes.
The constant improvement of astronomical observation techniques opens up new perspectives for studying various gravitational phenomena of interest at all scales. This fact, in turn, suggests testing modified theories of gravity in physical scenarios that allow us to constrain their free parameters by comparison with observations, and that at the same time guide us to discern features that make them distinguishable from General Relativity (GR). With this motivation, a study of interior solutions of relativistic stars is presented in the context of a vector-tensor theory named the generalized Proca theory. The stars correspond to spherically symmetric and static compact objects constituted by a perfect fluid governed by a polytropic equation of state (EOS). Starting from physical assumptions, analytical restrictions are found on the free parameters of the theory. Numerical solutions reveal deviations in the star's internal structure with respect to the GR predictions for the same initial conditions and EOS. Additionally, we highlight the importance of the vector field profile and of the sign of the chosen coupling for key quantities such as the mass and radius. This behaviour is attributed to the presence of a pressure due to the vector field that modifies the evolution of the star compared to the GR case. These results make such objects interesting targets for present and future astronomical observations.
The presence of anisotropy in the early stages of the universe is a natural phenomenon to be investigated, since the universe at that stage may have presented different properties. In this project, we consider a homogeneous and anisotropic cosmological model, the Kantowski-Sachs model, with two scale factors, $a(t)$ and $b(t)$. The matter content of the model consists of a Chaplygin gas, whose equation of state is $p = -A/\rho$, where $p$ and $\rho$ are the pressure and the density of the fluid, respectively, and $A$ is a positive constant. We derive the Einstein equations for this case and solve the resulting system numerically. We vary the initial conditions and the possible values of the constants in order to see whether the universe becomes isotropic, as we know it today. To measure the amount of anisotropy, we make use of the anisotropy parameter and study the fate of the universe.
The introduction of right-handed chirality partners for the neutrinos allows the calculation of deviations in the effective number of relativistic degrees of freedom in the early universe, which could be probed by new observations such as CMB-S4 or Planck+BAO. These sterile neutrinos can be useful when proposing dark matter candidates and when discussing different physical phenomena such as the matter-antimatter asymmetry. Moreover, the introduction of a new neutral interaction could explain the mechanism of thermalization of these new light particles while keeping the theory free of chiral anomalies. We motivate a $Z'$ model leptophilic to the $\mu$ and $\tau$ flavors with the introduction of an even number of right-handed chirality partners, providing a useful and compatible model in both particle physics and cosmology. We study the perturbative effects of these new particles on the Cosmic Microwave Background spectrum, showing that deviations from the $\Lambda$CDM model are most important in the small-angle region of the temperature multipole expansion. The effects of this model on the Hubble parameter are discussed, concluding that a slight improvement in the Hubble tension is obtained.
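For orientation, a standard estimate of the contribution of the extra right-handed states, assuming they thermalize via the $Z'$ and decouple while relativistic at a temperature $T_{\rm dec}$ (the detailed treatment in the talk may differ), is
$$ \Delta N_{\rm eff} \simeq N_{\rm extra} \left[ \frac{g_{*s}(T_\nu^{\rm dec})}{g_{*s}(T_{\rm dec})} \right]^{4/3} \approx N_{\rm extra} \left[ \frac{10.75}{g_{*s}(T_{\rm dec})} \right]^{4/3}, $$
where $N_{\rm extra}$ counts the additional neutrino-like species and $g_{*s}$ is the effective number of entropy degrees of freedom; earlier decoupling (larger $g_{*s}(T_{\rm dec})$) dilutes the contribution.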
Cosmologists have proposed that a mysterious substance called quintessence, or dark energy, may explain why our universe is accelerating. But what is it made of? What produces the accelerated expansion of the universe? What caused the hypothetical inflationary era? There are many questions about the evolution and development of the universe. Today, a revolution is taking place in cosmology. Many and varied new physical models address the structure and fate of the universe, ranging from exotic fields to modified theories of gravity. In recent years, numerous observations and theoretical models have reshaped the field of cosmology, and many cosmologists are exploring the possibility that a large part of the energy of the universe, called dark energy, has a geometric origin, which can be explored with the advent of the so-called modified theories of gravity.
In this work we propose to develop a theoretical scheme, based on tracker fields, that can be contrasted with SN Ia observations and that allows us to reconstruct the form of the fields and potentials whose energy densities track the energy densities of radiation, matter and dark energy (cosmological constant), approaching this through the solution of cosmological dynamical systems within the framework of modified theories of gravity.
Cosmic voids, regions tens of Megaparsecs across with a matter density below the cosmological mean, are a laboratory for cosmology. The morphology and evolution of these voids are influenced by the matter-energy content of the Universe.
In the last decade, analytical studies showed that it would be possible to use the geometric characterization of voids as a probe to determine cosmological parameters.
In this work we use simulations of the large-scale dark matter distribution to explore this hypothesis. The simulations are run with different cosmological parameters, which allows us to quantify the influence of the cosmological constant on the shapes of the voids we find. After presenting these results, we will discuss the possibility of detecting these effects in galaxy surveys.
The cosmic web is the pattern that emerges when observing the distribution of matter on scales of tens of Megaparsecs. This structure is revealed through observations of the galaxy distribution. The statistical study of these distributions is mostly carried out through the two-point correlation function. Another option is to study the connectivity properties of graphs built on these galaxies. In this talk we will present a way to quantify the structure of the graph through a definition of statistical complexity built from the Shannon entropy and the Jensen-Shannon divergence. We use cosmological simulations to measure the expected values of complexity in the Universe and find results of the order of $10^{-1}$ bits$^2$. Likewise, we study the influence on the complexity of six factors: cosmic variance, geometry, redshift-space distortions (RSD), redshift evolution, cosmological parameters, and number density. Finally, we will show results of the complexity on observational catalogues of the Baryon Oscillation Spectroscopic Survey (BOSS) to comment on the relevance of this complexity measure for studies of galaxy formation and cosmology.
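The abstract does not spell out the exact definition used; a common construction, consistent with the quoted units of bits$^2$, is the product of the Shannon entropy of a graph-derived distribution $P$ and its Jensen-Shannon divergence from a reference distribution $P_{\rm ref}$:
$$ H[P] = -\sum_i p_i \log_2 p_i, \qquad D_{\rm JS}[P,Q] = H\!\left[\frac{P+Q}{2}\right] - \frac{H[P]+H[Q]}{2}, \qquad C = H[P]\; D_{\rm JS}[P, P_{\rm ref}], $$
with both factors measured in bits, so that $C$ carries units of bits$^2$.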
In this talk I present a study of Higgs boson observables at the LHC and their impact on electroweak baryogenesis in the context of Standard Model effective field theory with the inclusion of dimension-6 operators of Higgs and fermion fields. I will also discuss how these new terms can generate an electric dipole moment for leptons and thus add further constraints on producing the baryon asymmetry. I present the main results when considering a single fermion flavor term or combinations of two flavors. For each case, I identify which observables constrain the new terms most severely and show how the interplay of the complementary constraints singles out viable regions of parameter space.
In recent work (Yang et al., 2007.03150) we showed some examples of how to use the mark weighted correlation functions (MCFs) to study the large-scale structure of the Universe. In this talk I will summarize that work and show how MCFs exhibit distinctive peaks and valleys that do not exist in the standard correlation functions. Although MCFs are not suited to be used as "standard rulers" to probe the cosmic expansion history, they can be used to study structure formation parameters, such as $\sigma_8$ and the galaxy bias. Finally, I will show how we used MCFs to improve cosmological constraints on $\Omega_m$ and $w$ by 30% and 50%, respectively.
The spatial distribution of galaxies on large scales forms a striking filamentary pattern known as the cosmic web. Measuring and characterizing this pattern is one of the main goals in cosmology. There are algorithms that can perform this task using the full dark matter distribution as an input. However, in observations, the dark matter distribution is not observable. To bypass this limitation there are other types of algorithms that build a graph on top of an observed 3-dimensional galaxy distribution to roughly quantify the cosmic web patterns.
In this talk, I will show a Machine Learning-based approach that can link these two types of algorithms and help us to infer the dark matter cosmic web from observed galaxies. I use state-of-the-art cosmological simulations from the Illustris-TNG project as a training data-set.
I will present results for our cosmic web reconstruction methods and comment on their possible application to observational data from the Dark Energy Spectroscopic Instrument (DESI).
Baryon Acoustic Oscillations (BAO) provide a standard ruler for measuring distances far back into the history of the Universe. The galaxy-galaxy two-point correlation function (2PCF) is the standard proxy for the measurement of the BAO scale using large-scale spectroscopic surveys. There is, however, information beyond the two-point statistics encoded in the galaxy distribution that can be extracted in order to improve the BAO scale measurement. One approach is to make use of the distribution of cosmic voids, broadly defined as minima in the matter density field. Previous studies have shown that this yields an improvement for BOSS low-z data (Zhao et al. 2020). Our aim is to improve upon this method by defining the cosmic void clustering sample in a galaxy density-independent way, which is useful when analyzing matter tracers with different observed densities.
We define a generalized void radius cut based on the local matter tracer density that maximizes the BAO peak signal-to-noise ratio by construction and minimizes systematic effects due to galaxy incompleteness. We apply this cut to simulations and to BOSS DR12 data. However, the BAO fitting results appear roughly equivalent to those using the constant (galaxy density-dependent) radius cut on the BOSS DR12 data. This approach can, however, be extended to other matter tracers (ELGs, QSOs) used in current redshift surveys such as the recent eBOSS data release or the upcoming DESI survey.
The bias $b$ is a parameter that relates the clustering of a set of objects to the clustering of the underlying dark matter. It has been found that $b$ depends on various galaxy properties; this phenomenon is usually known as galaxy assembly bias. In general, $b$ depends on how the galaxies have formed. We quantify the galaxy assembly bias of simulated galaxies at $z = 0$ with stellar masses $M_\star >10^9 M_\odot h^{-1}$ from the cosmological magnetohydrodynamic simulation IllustrisTNG. We obtain the clustering-age relation, as well as the clustering for cuts in the $(g-i)$ colors and in the specific star formation rate, for central and satellite galaxies separately.
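On large scales the bias referred to above is the usual linear bias relating galaxy and matter clustering,
$$ \xi_{\rm gal}(r) = b^2\, \xi_{\rm m}(r), \qquad P_{\rm gal}(k) = b^2\, P_{\rm m}(k), $$
so that assembly bias manifests itself as a dependence of $b$ on secondary galaxy properties (age, colour, specific star formation rate) at fixed stellar or halo mass.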
The two-point correlation function is a statistical tool that quantifies the spatial distribution of galaxies. From this function it is possible to constrain the cosmological parameters. The Dark Energy Spectroscopic Instrument (DESI) is an experiment that, from 2021 to 2025, will survey millions of galaxies to measure this correlation function and constrain the expansion history of the Universe. The instrument consists of 5000 robotic positioners that guide the optical fibers collecting the light of the observed galaxies. These mechanical characteristics can introduce biases in the measurement of the two-point correlation function. In this talk we will present the Pair Inverse Probability (PIP) weighting strategy to correct these biases. Our results are based on detailed simulations of the expected DESI results in its first months of observations.
The Mapper of the IGM Spin Temperature (MIST) instrument is a planned radio telescope that will be installed in Chile next year. MIST consists of one dipole antenna that will explore the early Universe in the 50–120 MHz frequency band. Previous observations by the EDGES collaboration have led to claims of a positive detection of the 21-cm absorption feature of primordial hydrogen at z=17, in the Epoch of Reionization. MIST seeks to improve upon the EDGES instrument by removing undesired antenna gain chromaticity and elevation angle dependence. In this talk we will show early results of the antenna parameter optimization, which indicate that the claimed EDGES detection could be confirmed or ruled out once MIST is deployed.
In this talk we introduce a pseudo-spectral approach based on the spin-weighted spherical harmonics for numerically treating the evolution of inhomogeneous and anisotropic cosmological models with spatial topology $S^1 \times S^2$.
This research analyzes and describes the motion of extended N-body systems endowed with an internal structure which interact gravitationally in an isolated, self-gravitating system. For this, use is made of the Newtonian and Einsteinian theories, understanding Newton's theory as a limit of general relativity.
We make a mathematical approach to the equations that describe the motion in each case. In the Newtonian case we approach the problem from two points of view: first we study the external problem, in which the motion of the centers of mass of each body relative to the global center of mass is determined; second, we study the internal problem, in which the motion of each body relative to its own center of mass is determined. In the relativistic case, we make use of the Damour, Soffel and Xu (DSX) formalism. This consists of taking N+1 charts on the manifold: N local charts, which describe the problem for each body locally (the Geocentric System), and a global chart that describes the dynamics of the system as a whole (the Barycentric System); finally, the relationship between the local charts and the global chart is established.
This study allows us to identify the relationship between these two theories, as well as to determine the elements that cannot be carried over from one theory to the other. The importance of this research lies not only in establishing a relationship between Newtonian gravitation and general relativity, but also in its applications to the design of space missions, to effects in cosmological spacetimes, and to technological development.
In recent years, tests of General Relativity have been made in which numerical simulations played an important role; in particular, Numerical Relativity (NR) simulations have been used in the understanding of astrophysical phenomena. Recent works have shown the importance of NR in cosmology, using cosmological perturbations for a flat expanding universe with a perfect fluid background without the presence of magnetic fields. The main interest of this work is to gain insight into cosmic magnetic fields through the cosmic dynamo equation. To achieve this, first-order cosmological perturbations are treated on a spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) metric in order to obtain the cosmic dynamo equation. Finally, NR is used to write the perturbed Einstein field equations in the 3+1 formalism in order to obtain the evolution equations for the magnetic field.
Two methods for mass profile reconstruction in disc-like galaxies are presented in this work. The first fits the rotation curve based on circular velocity data obtained observationally for a stellar system, while the second is focused on the gravitational lensing effect (GLE). For these mass reconstructions, two routines developed in the Python programming language were used: one of them is Galrotpy, which was built by members of the Galaxies, Gravitation and Cosmology group of the Observatorio Astronómico Nacional of the Universidad Nacional de Colombia and whose functionality is applied to rotation curves; the other routine is Gallenspy, which was created in the course of this work and is focused on the GLE. It should be noted that both routines perform a parametric estimation based on Bayesian statistics, which allows the uncertainties of the estimated values to be obtained. Finally, the power of combining galactic dynamics and the GLE is shown: the mass profiles of the galaxies SDSSJ2141-001 and SDSSJ1331+3628 were reconstructed with Galrotpy and Gallenspy, and the results obtained are compared with those reported by other authors for these systems.
Providing techniques and models that describe the formation of large structures in the universe is a great challenge in cosmology. In this work, the matter spectrum is reconstructed at second order using semi-analytical methods; in this context, general equations of motion for the dark matter fluid are constructed. Due to the complexity of this kind of equations, a solution is proposed first using a linear-regime approach and then a Fourier-space representation, obtaining solutions at second order; with elements of quantum field theory, one-loop corrections are obtained. Finally, equations of motion are proposed for a mixed fluid of baryonic and dark matter.
The observation of the GW170817 binary neutron star (BNS) merger event has imposed strong local bounds on the speed of gravitational waves (GWs), inferring that the propagation speed of GWs is equal to the speed of light. Current GW detectors in operation will not be able to observe BNS mergers out to large cosmological distances, where possible cosmological corrections to the cosmic expansion history are expected to play an important role, especially for investigating possible deviations from general relativity. Future GW detector projects will be able to detect many coalescences of BNS at high $z$, such as the third-generation ground-based Einstein Telescope (ET) and the space-based Deci-hertz Interferometer Gravitational wave Observatory (DECIGO). In this paper, we relax the condition $c_T/c = 1$ to investigate modified GW propagation, where the propagation speed of GWs is not necessarily equal to the speed of light. We also consider the possibility of running Planck mass corrections to the modified GW propagation. We parametrize both corrections in terms of an effective GW luminosity distance and we perform a forecast analysis using standard siren events from BNS mergers, within the sensitivity predicted for ET and DECIGO. At high $z$ we find very strong forecast bounds on the running of the Planck mass, namely $\mathcal{O}(10^{-1})$ and $\mathcal{O}(10^{-2})$ from ET and DECIGO, respectively. Possible anomalies in GW propagation are bound to $|c_T/c - 1| \leq 10^{-2}$ from both ET and DECIGO. We finally discuss the consequences of our results for modified gravity phenomenology.
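Although the abstract does not quote its exact parametrization, a form commonly used in the literature for the effective GW luminosity distance with a running Planck mass $\alpha_M \equiv {\rm d}\ln M_*^2/{\rm d}\ln a$ is
$$ d_L^{\rm GW}(z) = d_L^{\rm EM}(z)\, \exp\!\left[ \frac{1}{2} \int_0^z \frac{\alpha_M(z')}{1+z'}\, {\rm d}z' \right], $$
so that standard sirens compare the GW-inferred distance with the electromagnetic one to bound $\alpha_M$, while the propagation speed $c_T$ is constrained separately through arrival-time information.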
Workshop title: Fundamental Interactions
Created by: Suratómica https://www.suratomica.com/
Time required: 2 hours
Number of participants: 15, with prior registration
Curatorial text of the workshop: The artist observes, studies, records, analyzes, concludes and creates a sensory object that reflects the method practiced from the origin of the idea in thought to its consequence in space. The scientific method applied to art is not very different from the scientific method applied to science. In both cases it is a process of creation based on a question, curiosity or observation, followed by a hypothesis, then a theory, and finally a law or a work. In both cases it is a series of procedures that bring new knowledge into the world.
Workshop description: This workshop seeks a way of approaching science from art and vice versa, through a series of steps that bring the scientific method and the process of artistic creation close together, to the point of nearly equating them. In this way, limitations of and between the two disciplines are removed, and participants learn to take advantage of their points of contact. What is the difference between the creation of scientific knowledge and the creation of artistic works? What are the steps of artistic thinking and what are those of scientific thinking? What is the difference between a hypothesis, a sketch, an experiment and an analysis of materials in the two disciplines? What is the difference between the artistic product and the scientific product? Can they be one and the same?
Methodology:
1. Introduction: the workshop and its objectives are described
2. Examples of creation in science and in art are presented
3. The workshop exercise is introduced
4. Collective exercise
5. Presentation of the results
6. Conclusion of the workshop