1 Introduction and Preliminaries

1.1 The “Quantum-Gravity problem” as seen by a phenomenologist

Our present description of the fundamental laws of Nature is based on two disconnected pieces: “quantum mechanics” and “general relativity”. On the quantum-mechanics side our most significant successes were obtained applying relativistic quantum field theory, which turns out to be the appropriate formalization of (special-) relativistic quantum mechanics. This theory neglects gravitational effects and is formulated in a flat/Minkowskian spacetime background. Interesting results (but, so far, with little experimental support) can be obtained by reformulating this theory in certain curved spacetime backgrounds, but there is no rigorous generalization allowing for the dynamics of gravitational fields. The only known way of having a manageable formulation of some gravitational effects within quantum field theory is to adopt the perspective of effective field theory [144, 276], which allows Lagrangians that are not renormalizable. At leading order, this effective theory just gives us back Einstein’s general relativity (GR), but beyond leading order it predicts corrections proportional to powers of \({E^2}/E_p^2\), where E is the characteristic energy scale of the process under consideration (typically, the center-of-mass energy for a scattering experiment) and \(E_p\) is the Planck scale (\(E_p \sim 10^{28}\) eV). The effective-field-theory description evidently breaks down at energies E on the order of the Planck scale, leaving unanswered [144, 276] most of the core issues concerning the interplay between gravity and quantum mechanics. Most importantly, the experiments that have formed our trust in quantum mechanics are nearly exclusively experiments in which gravitational effects are negligible at the presently-achievable levels of experimental sensitivity (some of the rare instances where the outcome of a quantum-mechanical measurement is affected by gravitational effects, such as the one reported in Ref. [428], will be discussed later in this review).

On the gravity side our present description is based on GR. This is a classical-mechanics theory that neglects all quantum properties of particles. Our trust in GR has emerged in experimental studies and observations in which gravitational interactions cannot be neglected, such as the motion of planets around the Sun. Planets are “composed” of a huge number of fundamental particles, and the additive nature of energy (playing in such contexts roughly the role of “gravitational charge”) is such that the energy of a planet is very large, in spite of the fact that each composing fundamental particle carries only a small amount of energy. As a result, for planets gravitational interactions dominate over other interactions. Moreover, a planet satisfies the conditions under which quantum theory is in the classical limit: in the description of the orbits of the planets the quantum properties of the composing particles can be safely neglected.

GR and relativistic quantum mechanics do have some “shared tools”, such as the notion of spacetime, but they handle these entities in profoundly different manners. The differences are indeed so profound that it might be natural to expect only one or the other language to be successful, but instead they have both been extremely successful. This is possible because of the type of experiments in which they have been tested so far, with two sharply separated classes of experiments, allowing complementary approximations.

While somewhat puzzling from a philosopher’s perspective, all this would not on its own amount to a scientific problem. In the experiments we are presently able to perform, and at the level of sensitivities we are presently able to achieve, there is no problem. But a scientific problem, which may well deserve to be called a “quantum-gravity problem”, is found if we consider, for example, the structure of the scattering experiments done in particle-physics laboratories. There are no surprises in the analysis of processes with an “in” state of two particles, each with an energy of \(10^{12}\) eV. Relativistic quantum mechanics makes definite predictions for the (distributions/probabilities of) results of this type of measurement procedure, and our experiments fully confirm the validity of these predictions. We are presently unable to redo the same experiments having as “in” state two particles with energy of \(10^{30}\) eV (i.e., energy higher than the Planck scale), but, nonetheless, if one factors out gravity, relativistic quantum mechanics makes a definite prediction for these conceivable (but presently undoable) experiments. However, for collisions of particles of \(10^{30}\) eV energy, the gravitational interactions predicted by GR are very strong and gravity should not be negligible. On the other hand, the quantum properties predicted for the particles by relativistic quantum mechanics (for example the fuzziness of their trajectories) cannot be neglected, contrary to the “desires” of the classical mechanics of our present description of gravity. One could naively attempt to apply both theories simultaneously, but it is well established that such attempts do not produce anything meaningful (for example, one encounters uncontrollable divergences). As mentioned above, a framework where these issues can be raised in a very precise manner is that of effective quantum field theory, and the breakdown of the effective quantum field theory of gravitation at the Planck scale signals the challenges that concern me here.

This “trans-Planckian collisions” picture is one (not necessarily the best, but a sufficiently clear) way to introduce a quantum-gravity problem. But is the conceivable measurement procedure I just discussed truly sufficient to introduce a scientific problem? One ingredient appears to be missing: the measurement procedure is conceivable but presently we are unable to perform it. Moreover, one could argue that mankind might never be able to perform the measurement procedure I just discussed. There appears to be no need to elaborate predictions for the outcomes of that measurement procedure. However, it is easy to see that the measurement procedure I just discussed contains the elements of a true scientific problem. One relevant point can be made considering the experimental/observational evidence we are gathering about the “early Universe”. This evidence strongly supports the idea that in the early Universe particles with energies comparable to the Planck energy scale \(E_p\) were abundant, and that these particles played a key role in those early stages of evolution of the Universe. This does not provide us with opportunities for “good experiments” (controlled repeatable experiments), but it does represent a context in which proposals for the quantum-gravity/Planck-scale realm could be tested. Different scenarios for the physical theory that applies in the quantum-gravity realm could be compared on the basis of their description of the early Universe. The detailed analysis of a given physical theory for the quantum-gravity realm could allow us to establish some characteristic predictions for the early Universe and for some manifestations in our observations (cosmology) of those early stages of evolution of the Universe. The theory would be testable on the basis of those predictions for our present observations. Therefore, these early-Universe considerations provide an opportunity for comparison between the predictions of a quantum-gravity theory and measurement results. And it might not be necessary to resort to cosmology: the fact that (in setting up the quantum-gravity problem) we have established some objective limitations of our present theories implies that some qualitatively new effects will be predicted by the theory that applies to the quantum-gravity realm. These effects should dominate in that realm (in particular, they will profoundly affect the results of measurements done on particles with Planck-scale energy), but they should always be present. For processes involving particles with energy E much smaller than \(E_p\) the implications of a typical quantum-gravity theory will be rather marginal but not altogether absent. The magnitude of the associated effects should be suppressed by some small overall coefficients, probably given by powers of the ratio \(E/E_p\): small, but different from zero.

Therefore, we do have a genuine “quantum-gravity problem”, and this problem has been studied for more than 70 years [508]. Unfortunately, most of this research has been conducted assuming that no guidance could be obtained from experiments. But, if there is to be a “science” of the quantum-gravity problem, this problem must be treated just like any other scientific problem, desperately seeking the guidance of experimental facts, and letting those facts take the lead in the development of new concepts. Clearly, physicists must hope this also works for the quantum-gravity problem, or else abandon it to the appetites of philosophers.

It is unfortunately true that there is a certain level of risk that experiments might never give us any clear lead toward quantum gravity, especially if we are correct in expecting that the magnitude of the characteristic effects of the new theory should be set by the tiny Planck length (\(\ell_p \equiv 1/E_p \sim 10^{-35}\) m, the inverse of the huge Planck scale in natural units). But even if the new effects were really so small we could still try to uncover experimentally some manifestations of quantum gravity. This is hard, and there is no guarantee of success, but we must try. As I shall stress again in later parts of this first section, some degree of optimism could be inspired by considering, for example, the prediction of proton decay within certain modern grand unified theories of particle physics. The decay probability for a proton in those theories is really very small, suppressed by the fourth power of the ratio between the mass of the proton and the grand unification scale (a scale that is only some three orders of magnitude smaller than the Planck scale), but meaningful tests of scenarios for proton decay in grand unified theories have been devised.

While the possibility of a “quantum gravity phenomenology” [52] could be considered, on the basis of these arguments, even in the early days of quantum-gravity research, a sizable effort has finally matured only since the second half of the 1990s. In particular, only over this recent period do we have the first cases of phenomenological programs that truly affect the directions taken by more formal work in quantum gravity. And, especially in relation to this healthy two-way cross-influence between formal theory and phenomenology, a prominent role has been played by proposals testing features that could be manifestations of spacetime quantization. The expectation that the fundamental description of spacetime should not be given by a classical geometry is shared by a large majority of quantum-gravity researchers. And, as a result, the phenomenology inspired by this expectation has had influence on a sizable part of the recent quantum-gravity literature. My goal here is primarily the one of reviewing the main results and proposals produced by this emerging area of phenomenology centered on the possibility of spacetime quantization.

1.2 Quantum spacetime vs quantum black hole and graviton exchange

The notion of “quantum-gravity research” can have a different meaning for different researchers. This is due both to the many sides of the quantum-gravity problem and to the fact that most researchers arrive at the study of quantum gravity from earlier interests in other areas of physics research. Because of its nature, the quantum-gravity problem has a different appearance, for example, to a particle physicist and to a relativist.

In particular, this affects the perception of the implications of the “double role” of gravitational fields: unlike all other fields studied in fundamental physics the gravitational field is not just used to describe “gravitational interactions” but also characterizes the structure of spacetime itself. The structure of Einstein’s theory of gravitational phenomena tells us both of the geometry of spacetime, which should be described in terms of smooth Riemannian manifolds, and of the implications of Einstein’s equations for dynamics. But in most approaches these two sides of gravity are not handled on the same footing. Particularly from the perspective of a particle physicist it makes sense to focus on contexts amenable to treatment assuming some given Riemannian-manifold spacetime background and “gravitons” as mediators of “perturbative gravitational interactions”. Other researchers, typically not coming from a particle-physics background, are instead primarily interested in speculations for how to replace Riemannian manifolds in the description of the structure of spacetime, and contemplating a regime describing perturbative gravitational interactions is not one of their main concerns.

Because of my objectives, it is appropriate for me to locate early on in this review the quantum-spacetime issues within the broader spectrum of quantum-gravity research.

1.2.1 The quantum-black-hole regime

We do expect that there is a regime of physics where quantum gravity does not simply amount to small corrections to our currently adopted theories, but rather one where our current theories are completely inapplicable. An example of this is the class of hypothetical situations discussed in my opening remarks: if we consider a collision with impact parameter on the order of the Planck length between two particles, which exchange in the collision an energy on the order of the Planck scale, then our current theories do not even give us a reliable first approximation of the outcome.

Such collisions would create a concentration of energy comparable to the Planck scale in a region of Planck-length size. And we have no previous experience with systems concentrated in a Planck-length region with rest energy (Footnote 1) on the order of the Planck scale. In such cases, the pillars of our current description of the laws of physics come into very explicit conflict. On one side, we have quantum mechanics, with its characteristic property that a rest energy M can only be localized within a region the size of the Compton wavelength

$${r_C}\sim {\hbar \over M}.$$

On the other side, GR assigns to any localized (point-like) amount M of rest energy a region whose size is given by its Schwarzschild radius

$${r_S}\sim {G_N}M\sim {{\ell _p^2M} \over \hbar},$$

where \(G_N\) denotes Newton’s constant and \(\ell_p\) denotes the Planck length (\({\ell _p} \equiv \sqrt {\hbar {G_N}} \sim {10^{- 35}}\) m, in units with the speed-of-light scale set to unity, c = 1).

If \(M \sim \hbar/\ell_p\) (rest energy on the order of the Planck scale), the Compton and Schwarzschild radii are of the same order of magnitude, and quantum mechanics cannot ignore gravitation but at the same time gravitation cannot ignore quantum mechanics. Evidently we can get nowhere attempting to investigate this issue by just combining (Footnote 2) somehow the Standard Model of particle physics and the general-relativistic classical description of gravitational phenomena.
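Indeed, equating the two radii makes the Planck scale emerge in a single line of algebra:

$${r_C}\sim {r_S}\quad \Rightarrow \quad {\hbar \over M}\sim {{\ell _p^2M} \over \hbar}\quad \Rightarrow \quad M\sim {\hbar \over {\ell _p}} \equiv {E_p}\sim {10^{28}}\;{\rm{eV}}.$$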

Another context of similar conceptual content can be imagined if we take for granted (which we can do only as a working assumption) the existence of Hawking radiation. We could then start with an isolated macroscopic black hole and attempt to describe its whole future evolution. As long as the black hole remains macroscopic, while losing mass through Hawking radiation, we can imagine devising a reliable first approximation. But when the black hole reaches Planck-length size (and Planck-scale rest energy) we are again left without any even approximate answers.

The description of these types of “quantum-black-hole regimes” (a label that I shall use rather generically, including, for example, for the regime characteristic of the very early Universe) is evidently a case in which a satisfactory picture requires understanding how both roles of the gravitational field need to be revised: how spacetime structure should then be described and how the gravitational-interaction aspects of gravitation should then be described.

Providing a description of such a quantum-black-hole regime is probably the most fascinating challenge for quantum-gravity research, but evidently it is not a promising avenue for actually discovering quantum-gravity effects experimentally. As I shall mention, somewhat incidentally, at a few points in this review, this expectation would change if, surprisingly, gravitation turned out to be much stronger than we presently expect, so that at least in some contexts its strength would not be characterized by the Planck scale. But this review adopts the conservative view that quantum-gravity effects are at least roughly as small as we expect, and, therefore, characterized roughly by the Planck scale. And if that is the case, it is hard to even imagine a future in which we gain access to a quantum-black-hole regime.

A key assumption of this review is that quantum gravity will manifest itself experimentally in the shape of small corrections in contexts that we are able to describe, in first approximation, within our current theories.

1.2.2 The graviton-exchange regime

For particle physicists (and, therefore, for at least part of the legitimate overall perspective on the quantum-gravity problem) the most natural opportunities in which quantum gravity could introduce small corrections are in contexts involving the gravitational-interaction aspects of quantum gravity.

Rather than attempting to give general definitions, let me offer a clear example: studies of long-range corrections to the Newtonian limit of gravitation, where gravity does look like a Newton-force interaction. By focusing on long-range features one stays far from the trouble zone mentioned in the previous Section 1.2.1. But there are still issues of considerable interest, at least conceptually, that quantum gravity should address in that regime. It is natural to expect that the description of gravity in terms of a Newton-force interaction would also show traces of the new laws that quantum gravity will bring about.

This possibility can be investigated coherently (but without any guarantee of a reliable answer) with effective-field-theory techniques applied to the nonrenormalizable theory of quantum gravity obtained by linearizing the Einstein-Hilbert theory before quantization. It essentially amounts to an exercise in exploring the properties that such an effective theory attributes to gravitons. And one does derive a correction to Newton’s potential with the behavior [210, 27, 329, 120, 324, 449]

$$\Delta {V_{{\rm{Newton}}}}\sim {{\hbar {G^2}M} \over {{r^3}}}\sim {{\ell _p^2} \over {{r^2}}}{V_{{\rm{Newton}}}},$$

where M is the mass of the source of the gravitational potential, and on the right-hand side I have highlighted the fact that this correction comes in suppressed, with respect to the standard leading Newtonian term, by a factor given by the square of the ratio of the Planck length to the distance scale at which the potential is probed.

This illustrates the sort of effects one may look for within schemes centered on a background Minkowski spacetime and properties of the graviton. In this specific case the effect is unmanageably small (Footnote 3), but in principle one could look for other effects of this sort that might be observably large in some applications.
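To make the suppression explicit, here is a minimal numeric sketch (the Planck-length value and the probed distances are round numbers, and the order-one prefactor of the effective-field-theory result is ignored):

```python
# Relative size of the quantum-gravity correction to Newton's potential:
# Delta V / V_Newton ~ (l_p / r)^2, with the O(1) prefactor ignored.
l_p = 1.6e-35  # Planck length in meters

for r in [1e-3, 1e-6, 1e-19]:  # 1 mm, 1 micron, shortest wavelengths probed
    print(f"r = {r:.0e} m  ->  Delta V / V_Newton ~ {(l_p / r) ** 2:.1e}")
# Even at r ~ 1e-19 m the relative correction is ~ 3e-32; at the
# sub-millimeter distances actually probed by dedicated gravity
# experiments it is ~ 1e-64.
```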

1.2.3 The quantum-spacetime regime

Having given some examples of the ways in which quantum gravity might change our description of gravitational interactions, let me now turn to the complementary type of issues that are in focus when one studies the idea of spacetime quantization.

The nature of the quantum-gravity problem tells us in many ways that the ultimate description of spacetime structure is not going to be in terms of a smooth classical geometry. We do not have at present enough information to deduce how our formalization of spacetime should change, but it must change. The collection of arguments in support of this expectation (see, e.g., Refs. [406, 532, 269, 44, 332, 442, 211, 20, 432, 50, 249, 489]) is impressive and relies both on aspects of the quantum-gravity problem and on analyses of proposed approaches to the solution of the quantum-gravity problem.

Surely, some very dramatic manifestations of spacetime quantization should be expected in what I labeled as the quantum-black-hole regime. But, as already stressed above, it is hard to even imagine managing to derive evidence of spacetime quantization from experimental access to that regime. It is easy to see that our best chance for uncovering non-classical properties of spacetime is to focus on the implications of spacetime quantization for the “Minkowski limit” (or perhaps the “de Sitter limit”) of quantum gravity. Our data on contexts we presently describe as involving particle propagation in a background Minkowski spacetime is abundant and of high quality. If the fundamental formalization of spacetime is not in terms of a smooth classical geometry then we should find some traces of spacetime quantization also in those well-studied contexts. The effects are likely to be very small, but the quality of the data available to us in this quantum-spacetime regime is very high, occasionally high enough to probe spacetime structure with Planck-scale sensitivity.

This is the main theme of my review. I do not elaborate further on it here since it will take full shape in the following.

1.2.4 Aside on the classical-gravity regime

It is an interesting symptom of the fragmentation of the quantum-gravity community that it is sometimes difficult to explain to a relativist how graviton-exchange studies could be seen as part of quantum-gravity research, and difficult to explain to a particle physicist how studies of particles not interacting gravitationally in a quantum spacetime can play a role in quantum-gravity research. I hope this Section proves useful in this respect.

Let me also discuss one more aspect of the interplay between quantum mechanics and gravitation that is of interest from a quantum-gravity perspective, even though at first sight it does not look like quantum gravity at all. These are studies of quantum mechanics in a curved background spacetime, without assuming spacetime is quantized and without including any graviton-like contribution to the interactions. No aspect of gravity is quantized in such studies, but they concern a regime that must be present as a limiting case of quantum gravity, and, therefore, by studying this regime we are establishing constraints on how quantum gravity might look.

On the conceptual side, perhaps the most significant example of how quantum mechanics in curved spacetime backgrounds can provide important hints toward quantum gravity is provided by studies of black-hole thermodynamics. And it is a regime of physics to which we do have some experimental access, mainly through studies of the quantum properties of particles in cases where the geometry of spacetime near the surface of the Earth (essentially the Earth’s gravity, the acceleration g) does matter. I shall mention a couple of these experimental studies in the next Section 1.3.

While my focus here is on quantum-spacetime studies, it will occasionally be useful for me to adopt the perspective of quantum mechanics in curved classical background spacetimes.

1.3 20th-century quantum-gravity phenomenology

In order to fully expose the change of perspective that matured over the last decade, it is useful to first discuss briefly some earlier analyses that made contact with experiments/observations and are relevant for the understanding of the interplay between GR and quantum mechanics.

Some of the works produced by Chandrasekhar in the 1930s already fit this criterion. In particular, the renowned Chandrasekhar limit [164, 165], which describes the maximum mass of a white-dwarf star, was obtained introducing some quantum-mechanical properties of particles (essentially Pauli’s exclusion principle) within an analysis of gravitational phenomena.

A fully rigorous derivation of the Chandrasekhar limit would require quantum gravity, but not all of it: it would suffice to master one special limit of quantum gravity, the “classical-gravity limit”, in which one takes into account the quantum properties of matter fields (particles) in the presence of rather strong spacetime curvature (treated, however, classically). By testing experimentally the Chandrasekhar-limit formula, one is, therefore, to some extent probing (the classical-gravity limit of) quantum gravity.

Also relevant to the classical-gravity limit of quantum gravity are the relatively more recent studies of the implications of the Earth’s gravitational field in matter-interferometry experiments. Experiments investigating these effects have been conducted since the mid 1970s and are often called “COW experiments” from the initials of Colella, Overhauser and Werner, who performed the first such experiment [177]. The main target of these studies is the form of the Schrödinger equation in the presence of the Earth’s gravitational field, which could be naturally conjectured to be of the form (Footnote 4)

$$\left[ {- \left({{1 \over {2{M_I}}}} \right){{\vec \nabla}^2} + {M_G}\phi (\vec r)} \right]\psi (t,\vec r) = i{{\partial \psi (t,\vec r)} \over {\partial t}}$$
(1)

for the description of the dynamics of matter (with wave function \(\psi (t,\vec r)\), inertial mass \(M_I\) and gravitational mass \(M_G\)) in the presence of the Earth’s gravitational potential \(\phi (\vec r)\).

The COW experiments exploit the fact that the Earth’s gravitational potential puts together the contributions of a very large number of particles (all the particles composing the Earth) and, as a result, in spite of its per-particle weakness, the overall gravitational field is large enough to introduce observable effects.

Valuable reading material relevant for these COW experiments can be found in Refs. [484, 252, 14]. While the basic message is that a gravity-improved Schrödinger equation of the form (1) is indeed essentially applicable, some interesting discussions have been generated by these COW experiments, particularly as a result of the data reported by one such experiment [369] (data whose reliability is still being debated), which some authors have interpreted as a possible manifestation of a violation of the equivalence principle.
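For orientation on what these experiments measure, let me recall (as a sketch of the textbook result, in notation matching Eq. (1)) the phase difference that Eq. (1) predicts between the two paths of a neutron interferometer enclosing an area A, for neutrons of de Broglie wavelength λ, when the plane of the interferometer is tilted by an angle α with respect to the horizontal:

$$\Delta {\phi _{{\rm{COW}}}} \simeq {{2\pi \,{M_I}{M_G}\,g\,A\,\lambda \sin \alpha} \over {{h^2}}},$$

where g is the gravitational acceleration and h is the Planck constant. The fact that the product \(M_I M_G\) appears (rather than a single mass) is what renders these experiments sensitive to possible violations of the equivalence principle.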

In the same category of studies relevant for the classical-gravity limit of quantum gravity I should mention some proposals put forward mainly by Anandan (see, e.g., Ref. [77, 76]), already in the mid 1980s, and some very recent remarkable studies that test how the gravitational field affects the structure of quantum states, such as the study reported in Ref. [428] that I shall discuss in some detail later in this review.

Evidently, the study of the classical limit provides only a limited window on quantum gravity, and surely cannot provide any insight on the possibility of short-distance spacetime quantization, on which I shall focus here.

A list of early examples of studies raising at least the issue that spacetime structure might one day be probed with Planck-scale sensitivity should start with the arguments reported by Mead in 1965 [407]. There, Mead contemplated the broadening of spectral lines possibly resulting from adopting the Planck length as the value of the minimum possible uncertainty in position measurements. Then, in works published in the 1980s and early 1990s, there were a few phenomenological studies, adopting the Planck scale as target and focusing essentially on the possibility that quantum-mechanical coherence might be spoiled by quantum-gravity effects. One example is provided by the studies of Planck-scale-induced CPT-symmetry violation and violations of ordinary quantum mechanics reported in Refs. [219, 220] and references therein (see also, mainly for the CPT-symmetry aspects, Refs. [298, 108]), which are particularly relevant for the analysis of data [13] on the neutral-kaon system. A quantization of spacetime is encoded in the non-critical-string-theory formalism adopted in Refs. [219, 220], but only to the extent that one can view as such the novel description of time adopted there. A similar characterization applies to the studies reported in Refs. [452, 453, 454], which considered violations of ordinary quantum mechanics of a type describable in terms of the “primary-state-diffusion” formalism, with results that could be relevant for atom interferometry. Also in Refs. [452, 453, 454] the main quantum-spacetime feature is found in the description of time.

From a broader quantum-gravity-problem perspective I should also mention the possibility of violations of CPT and Lorentz symmetry within string theory, analyzed in Refs. [347, 345]. These studies, like most phenomenology-relevant studies inspired by string theory (see related comments later in this review), do not involve any spacetime quantization and do not necessarily imply that the magnitude of the effects is set by a Planckian scale. But they should nonetheless be prominently listed among the early proposals assuming that some of the theories used in quantum-gravity research might be testable with currently-available experimental techniques.

1.4 Genuine Planck-scale sensitivity and the dawn of quantum-spacetime phenomenology

The rather isolated proposals that composed “20th-century quantum-gravity phenomenology” were already rather significant. In particular, some of these studies, perhaps most notably the ones in Ref. [220] and Ref. [454], provided preliminary evidence that it might be possible to investigate experimentally the structure of spacetime at the Planck scale, which is expected to be the main key to the understanding of the quantum-gravity realm, and should involve spacetime quantization. But, in spite of their objective significance, these studies did not manage to have an impact on the overall development of quantum-gravity research. For example, all mainstream quantum-gravity reviews up to the mid 1990s still only mentioned the “experiments issue” in the form of some brief folkloristic statements, such as “the only way to test Planck-scale effects is to build a particle accelerator all around our galaxy”.

The fact that up to the mid 1990s the possibility of a quantum-spacetime phenomenology was mostly ignored resulted in large part from a common phenomenon of “human inertia” that affects some scientific communities, but some role was also played by a meaningful technical observation: the studies available up to that point relied on models with the magnitude of the effect set by a free dimensionless parameter, and at best the sensitivity of the experiment was at a level such that one could argue for setting the value of the dimensionless parameter as a ratio between the Planck length and one of the characteristic length scales of the relevant physical context. It is true that this kind of dimensional-analysis reasoning does not amount to really establishing that the relevant candidate quantum-gravity effect is being probed with Planck-scale sensitivity, and this resulted in a perception that such studies, while deserving some interest, could not be described objectively as probes of the quantum-gravity realm. For some theorists a certain level of uneasiness also originated from the fact that the formalisms adopted in studies such as the ones in Ref. [220] and Ref. [454] involved rather virulent departures from quantum mechanics.

Still, it did turn out that those earlier attempts to investigate the quantum-gravity problem experimentally were setting the stage for a wider acceptance of quantum-spacetime phenomenology. The situation started to evolve rather rapidly when, in the span of just a few years, between 1997 and 2000, several analyses were produced describing different physical contexts in which effects introduced genuinely at the Planck scale could be tested. It started with some analyses of observations of gamma-ray bursts at sub-MeV energies [66, 247, 491], then came some analyses of large laser-light interferometers [51, 54, 53, 433], quickly followed by the first discussions of Planck-scale effects relevant for the analysis of ultra-high-energy cosmic rays [327, 38, 73] and the first analyses relevant for observations of TeV gamma rays from blazars [38, 73, 463] (see also Refs. [331, 119]).

In particular, the fact that some of these analyses (as I discuss in detail later) considered Planck-scale effects amounting to departures from classical Lorentz symmetry played a key role in their ability to have an impact on a significant portion of the overall quantum-gravity-research effort. Classical Lorentz symmetry is a manifestation of the smooth (classical) light-cone structure of Minkowski spacetime, and it has long been understood that by introducing new “quantum features” (e.g., discreteness or noncommutativity of the spacetime coordinates) in spacetime structure, as some aspects of the “quantum-gravity problem” might invite us to do, Lorentz symmetry may be affected. And the idea of having some departure from Lorentz symmetry does not necessarily require violations of ordinary quantum mechanics. Moreover, by offering an opportunity to test quantum-gravity theories at a pure kinematical level, these “Lorentz-symmetry-test proposals” provided a path toward testability that appeared to be accessible even to the most ambitious theories that are being considered as candidates for the solution of the quantum gravity problem. Some of these theories are so complex that one cannot expect (at least not through the work of only a few generations of physicists) to extract all of their physical predictions, but the kinematics of the “Minkowski limit” may well be within our reach. An example of this type is provided by Loop Quantum Gravity (LQG) [476, 96, 502, 524, 93], where one is presently unable to even formulate many desirable physics questions, but at least some (however tentative) progress has been made [247, 33, 523, 75, 128] in the exploration of the kinematics of the Minkowski limit.

From a pure-phenomenology perspective, the late-1990s transition is particularly significant, as I shall discuss in greater detail later, inasmuch as it marks a sharp transition toward falsifiability. Some of the late-1990s phenomenology proposals concern effects that one can imagine honestly deriving in a given quantum-gravity theory. Instead, the effects described in studies such as the ones reported in Ref. [220] and Ref. [454] were not really derived from proposed models but rather were inspired by some paths toward the solution of the quantum-gravity problem (the relevant formalisms were not really manageable to the point of allowing a rigorous derivation of the nature and size of the effects under study, but some intuition for the nature and size of the effects was developed combining our limited understanding of the formalisms and some heuristics). Such a line of reasoning is certainly valuable, and can inspire some meaningful “new physics” experimental searches, but if the results of the experiments are negative the theoretical ideas that motivated them are not falsified: when the link from theory to experiments is weak (contaminated by heuristic arguments) it is not possible to follow the link in the opposite direction (use negative experimental results to falsify the theory). Through further developments of the work that started in the late 1990s we are now getting close to taking quantum-spacetime phenomenology from the mere realm of searches for quantum-spacetime effects (which are striking if they are successful but have limited impact if they fail) to the one of “falsification tests” of some theoretical ideas. This is a point that I am planning to convey strongly in some key parts of this review, together with another sign of maturity of this phenomenology: the ability to discriminate between different (but similar) Planck-scale physics scenarios. In order for a phenomenology to even get started one must find some instances in which the new-physics effects can be distinguished from the effects predicted by current theories, but a more mature phenomenology should also be able to discriminate between similar (but somewhat different) new-physics scenarios.

Together with some (however slow) progress toward establishing the ability to falsify models and discriminate between models, the phenomenology work of this past decade has also shown that the handful of examples of “Planck-scale sensitivities” that generated excitement between 1997 and 2000 were not a “one-time lucky streak”: the list of examples of experimental/observational contexts in which sensitivity to some effects introduced genuinely at the Planck scale is established (or found to be realistically within reach) has continued to grow at a steady pace, as the content of this review will indicate, and the number of research groups joining the quantum-spacetime-phenomenology effort is also growing rapidly. And it is not uncommon for recent quantum-gravity reviews [91, 475, 501, 151], even when the primary focus is on developments on the mathematics side, to discuss in some detail (and acknowledge the significance of) the work done in quantum-gravity phenomenology.

1.5 A simple example of genuine Planck-scale sensitivity

So far, my preliminary description of quantum-spacetime phenomenology has had a rather abstract character. It may be useful to now provide a simple example of an analysis that illustrates some of the concepts I have discussed and renders more explicit the fact that some of the sensitivity levels now available experimentally do correspond to effects introduced genuinely at the Planck scale.

These objectives motivate me to invite the reader to contemplate the possibility of a discretization of spacetime on a lattice with \(E_p^{- 1}\) lattice spacing, and a free particle propagating on such a spacetime. It is well established that under these hypotheses there are \(E_p^{- 2}\)-suppressed corrections to the energy-momentum on-shell relation, which in general are of the type (Footnote 5)

$${m^2} \simeq {E^2} - {\vec p^2} + \sum\limits_{\{{m_\mu}\}} {{\eta _{{m_0},{m_1},{m_2},{m_3}}}\left({{{{E^{{m_0}}}p_1^{{m_1}}p_2^{{m_2}}p_3^{{m_3}}} \over {E_p^2}}} \right)} + O\left({{{{E^6}} \over {E_p^4}}} \right),$$
(2)

where the non-negative integers \(\{m_\mu\}\) are such that \(m_0 + m_1 + m_2 + m_3 = 4\), and the parameters \({\eta _{{m_0},{m_1},{m_2},{m_3}}}\), which for \(E_p^{- 1}\) lattice spacing typically turn out to be of order 1 (when non-zero), reflect the specifics of the chosen discretization.
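As an elementary illustration of how such order-1 coefficients arise (a standard free-field discretization, used here only for orientation), consider the dispersion relation of a free particle on a spatial lattice of spacing \(\ell_p = E_p^{-1}\), with momentum p along one lattice direction:

$${E^2} = {m^2} + {4 \over {\ell _p^2}}{\sin ^2}\left({{{p\,{\ell _p}} \over 2}} \right) \simeq {m^2} + {p^2} - {{{p^4}} \over {12E_p^2}} + O\left({{{{p^6}} \over {E_p^4}}} \right),$$

which is indeed of the form of Eq. (2), with the coefficient of the \(p^4/E_p^2\) correction term given by 1/12: of order 1, as announced.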

I should stress that the idea of a rigid-lattice description of spacetime is not really one of the most advanced in quantum-gravity research (but see the recent related study in Ref. [114]). Moreover, while it is easy to describe a free particle on such a lattice, the more realistic case of interacting fields is very different, and its implications for the form of the on-shell relation are expected to be significantly more complex than assumed in Eq. (2). In particular, if described within effective field theory, the implications of such a lattice description of spacetime for interacting theories include departures from special-relativistic on-shellness for which there is no Planck-scale suppression, and which are therefore unacceptable. This is due to loop corrections, through a mechanism of the type discussed in Refs. [455, 182, 515, 190] (to which I shall return later), and assumes one is naturally unwilling to contemplate extreme fine-tuning. I feel it is nonetheless very significant that the (however unrealistic) case of a free particle propagating on a lattice with Planck-scale lattice spacing leads to features of the type shown in Eq. (2). It shows that such features have a magnitude set by nothing else but a feature of Planck-scale magnitude introduced in spacetime structure. So, in spite of the idealizations involved, the smallness of the effects discussed in this Section is plausibly representative of the type of magnitude that quantum-spacetime effects could have, even though any realistic model of the Standard Model of particle physics in a quantum spacetime should evidently remove those idealizations.

One finds that in most contexts corrections to the energy-momentum relation of the type in Eq. (2) are completely negligible. For example, for the analysis of center-of-mass collisions between particles of energy ∼ 1 TeV (such as the ones studied at the LHC) these correction terms affect the analysis at the level of 1 part in \(10^{32}\). However (at least if such a modified dispersion relation is part of a framework with standard laws of energy-momentum conservation), one easily finds [327, 38, 463, 73] significant implications for the cosmic-ray spectrum. In particular, one can consider the “GZK cutoff” (named after Greisen-Zatsepin-Kuzmin), which is a key expected feature of the cosmic-ray spectrum, and is essentially given by the threshold energy for cosmic-ray protons to produce pions in collisions with cosmic microwave background radiation (CMBR) photons. In the evaluation of the threshold energy for \(p + \gamma_{\rm CMBR} \rightarrow p + \pi\), the \(1/E_p^2\) correction terms of (2) can be very significant. As I shall discuss in greater detail in Section 3.5, whereas the classical-spacetime prediction for the GZK cutoff is around \(5 \cdot 10^{19}\) eV, a much higher value of the cutoff is naturally obtained [327, 38, 463, 73] in frameworks with the structure of Eq. (2). The Planck-scale correction terms in Eq. (2) turn into corresponding correction terms for the threshold-energy formula, and the significance of these corrections can be roughly estimated with \(\eta {E^4}/(\epsilon E_p^2)\), where E is the energy of the cosmic-ray proton and ϵ is the energy of the CMBR photon, to be compared to \(m^2/\epsilon\), where m here is the proton mass (a comparison that roughly gives the GZK scale). Adopting the “typical quantum-gravity estimate” (Footnote 6) |η| ∼ 1, it turns out that in the GZK regime the ratio E/m is large enough to compensate for the smallness of the ratio \(E/E_p\), so that a term of the type \({E^4}/(\epsilon E_p^2)\) is not negligible with respect to \(m^2/\epsilon\). This observation is one of the core ingredients of the quantum-spacetime phenomenology that has been done [327, 38, 463, 73] analyzing GZK-scale cosmic rays. Another key ingredient of those analyses is the quality of cosmic-ray data, which has improved very significantly over these last few years, especially as a result of observations performed at the Pierre Auger Observatory.
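The two magnitude estimates just quoted are easy to verify numerically; here is a minimal sketch (assuming |η| ∼ 1 and a typical CMBR-photon energy of ∊ ∼ 10⁻³ eV, a round number):

```python
# Size of the correction terms of Eq. (2) in two regimes (energies in eV).
E_p = 1.2e28   # Planck scale
m_p = 0.94e9   # proton rest energy
eps = 1e-3     # typical CMBR-photon energy (assumed round number)

# LHC-type collision: relative correction to the on-shell relation
E_lhc = 1e12
print(f"LHC-scale relative correction: {(E_lhc / E_p) ** 2:.0e}")  # ~1 part in 1e32

# GZK-scale proton: correction term E^4/(eps*E_p^2) vs standard term m^2/eps
E_gzk = 1e20
ratio = (E_gzk ** 4 / (eps * E_p ** 2)) / (m_p ** 2 / eps)
print(f"GZK-scale correction/standard term: {ratio:.0e}")  # far from negligible
```

Note that the second ratio can be rewritten as \((E/m_p)^2 (E/E_p)^2\), which exposes the role of the boost factor discussed in the next paragraph.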

Let me here use this cosmic-ray context also as an opportunity to discuss explicitly a first example of the type of “amplifier” that is inevitably needed in quantum-gravity phenomenology. It is easy to figure out [52, 73] that the large ordinary-physics number that acts as amplifier of the Planck-scale effect in this case is provided by the ratio between a cosmic-ray proton’s ultra-high energy, which can be of order \(10^{20}\) eV, and the mass (rest energy) of the proton. This is clearly shown by the comparison I made between an estimate of Planck-scale corrections of order \({E^4}/(\epsilon E_p^2)\) and an estimate of the uncorrected result of order \(m^2/\epsilon\). Evidently, E/m is the amplifier of the Planck-scale corrections, which also implies that these Planck-scale modifications of the photopion-production threshold formula go very quickly from being significant to being completely negligible, as the proton energy is decreased. A cosmic-ray proton with energy E on the order of \(10^{20}\) eV is so highly boosted that \(E/m_p \sim 10^{11}\), and this leads to \({E^4}/(\epsilon E_p^2) \sim {m^2}/\epsilon\) in my estimates, but at accelerator-accessible proton energies (and proton boosts with respect to its rest frame) the correction is completely negligible. According to traditional quantum-gravity arguments, which focus only on the role played by the ratio \(E/E_p\), one should assume that this analysis could be successful only when \(E/E_p \sim 1\); clearly instead this analysis is successful already at energies of order \(10^{20}\) eV (i.e., some 8 orders of magnitude below the Planck scale). And this is not surprising, since the relevant Planck-scale effect is an effect of Lorentz symmetry violation, so that large boosts (i.e., in this context, large values of \(E/m_p\)) can act as powerful amplifiers of the effect, even when the energies are not Planckian.

1.6 Focusing on a neighborhood of the Planck scale

There are a strikingly large number of arguments pointing to the Planck scale as the characteristic scale of quantum-gravity effects. Although clearly these arguments are not all independent, their overall weight must certainly be judged as substantial. I shall not review them here since they can easily be found in several quantum-gravity reviews, and there are even some dedicated review papers (see, e.g., Ref. [249]). Faithful to the perspective of this review, I do want to stress one argument in favor of the Planck scale as the quantum-gravity/quantum-spacetime scale, which is often overlooked, but is in my opinion particularly significant, especially since it is based (however indirectly) on experimental facts. These are the well-known experimental facts pointing to a unification of the coupling “constants” of the electroweak forces and of the strong force. While gravity usually is not involved in arguments that provide support for unification of the nongravitational couplings, it is striking from a quantum-gravity perspective that, even just using the little information we presently have (mostly at scales below the TeV scale), our present best extrapolation of the available data on the running of these coupling constants rather robustly indicates that there will indeed be a unification and that this unification will occur at a scale that is not very far from the Planck scale. In spite of the fact that we are not in a position to exclude that it is just a quantitative accident, this correspondence between (otherwise completely unrelated) scales must presently be treated as the clearest hint of new physics that is available to us.

As hinted in Figure 1, the present (admittedly preliminary) status of our understanding of this “unification puzzle” might even suggest that there could be a single stage of full unification of all forces, including gravity. However, according to the arguments that are presently fashionable among theoretical physicists, it would seem that the unification of nongravitational coupling constants should occur sizably above the scale of \((10^{27}\,{\rm eV})^{-1}\) (presently preferred is a value close to \((2 \cdot 10^{26}\,{\rm eV})^{-1}\)), and at such relatively large distance scales gravity should still be too weak to matter, since it is indeed naively expected that gravity should be able to compete with the other forces only starting at scales as short as the Planck length, of \(\sim (10^{28}\,{\rm eV})^{-1}\).

Figure 1: The figure shows semi-quantitatively the expected unification of the coupling “constants” of the Standard Model of particle physics, and also shows a naive description (which, however, we are so far unable to improve upon) of the strength of gravitational interactions, obtained by dividing the Newton constant by the square of the length scale characteristic of the process.

Even setting aside this coupling-unification argument, there are other compelling reasons for attributing to the Planck scale the role of characteristic scale of quantum-gravity effects. In particular, if one adopts the perspective of the effective-quantum-field-theory description of gravitational phenomena, the case for the Planck scale can be made rather precisely. A particularly compelling argument in this respect is found in Ref. [276], which focuses on the loss of unitarity within the effective-quantum-field-theory description of gravitational phenomena. Unitarity has been a successful criterion for determining the scale at which other effective quantum field theories break down, such as the Fermi theory of weak interactions. And it does turn out that the scale at which unitarity is violated for the effective-quantum-field-theory description of gravitational phenomena is within an order of magnitude of the Planck scale [276].

But it appears legitimate to consider alternatives to such estimates. For example, some authors (see, e.g., Ref. [146]) consider it to be likely that the “effective Newton constant” is also affected by some sort of renormalization-group running, and, if this is the case, then the prospects of all these arguments would change significantly. The length scale of spacetime quantization, \(\ell_{\rm QST}\), is naively assumed to be given by \(\sqrt {{G_N}(\infty)}\), where \(G_N(\infty)\) is the measured value of the Newton constant (characteristic of gravity at large distances); any running of gravity would instead imply an estimate (Footnote 7) of the type \({\ell _{{\rm{QST}}}} \sim \sqrt {{G_N}({\ell _{{\rm{QST}}}})}\).
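Purely as an illustration of the logic of this self-consistency condition, here is a toy numeric sketch; the power-law running law, and the parameters l_star and gamma in it, are hypothetical choices made only to show how the fixed point can land away from the naive Planck-length estimate:

```python
import math

# Toy fixed-point estimate of l_QST ~ sqrt(G_N(l_QST)) in natural units,
# with a purely hypothetical running law for the effective Newton constant.
G_inf = (1.6e-35) ** 2  # G_N(infinity) ~ (Planck length)^2, in m^2

def G_running(l, l_star=1e-33, gamma=1.0):
    # Hypothetical ansatz: gravity weakens at scales below l_star.
    return G_inf / (1.0 + (l_star / l) ** gamma)

l = math.sqrt(G_inf)  # start from the naive estimate l_p
for _ in range(60):   # iterate l -> sqrt(G_N(l)) until it settles
    l = math.sqrt(G_running(l))
print(f"self-consistent l_QST ~ {l:.1e} m  (naive estimate: 1.6e-35 m)")
# With these toy parameters the fixed point lands near 2.6e-37 m, i.e.,
# about two orders of magnitude below the naive Planck-length estimate.
```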

In relation to estimates of the scale of spacetime quantization, these considerations should invite us to consider the Planck length, \(\sim 10^{-35}\) m, only as a crude, very preliminary estimate. Throughout this review I shall tentatively take into account this issue by assuming that the scale where nonclassical properties of spacetime emerge should be somewhere between \(\sim 10^{-32}\) m and \(\sim 10^{-38}\) m, hoping that three orders of magnitude of prudence from above and below should suffice.

It is striking that these considerations also allow one to be more optimistic with respect to the (already intrinsically appealing [473]) hypothesis of a single stage of unification of all forces, possibly even at distance scales as “large” as \((10^{26}\,{\rm eV})^{-1} \simeq 10^{-33}\) m. And I find that, in relation to this issue, the recent (mini-)burst of interest in the role of gravity in unification is particularly exciting. A convincing case is being built concerning the possibility that gravity might affect the running of the Standard-Model coupling constants, and this too could have significant effects on the estimate of the unification scale (see, e.g., Refs. [473, 529] and references therein). And in turn there is a rather robust argument (see, e.g., Refs. [146, 147] and references therein) suggesting that the other fields might significantly affect the strength of gravity.

My personal perspective on the overall balance of this limited insight that is available to us is summarized by the attitude I have adopted in this review in relation to the expectations for the value of the quantum-spacetime scale. Unsurprisingly, I give top priority for this to the only (however faint) indication we have from experiments: the values measured for the coupling constants at presently accessible “ultra-large” distance scales appear to be arranged in such a way as to produce a unification of the nongravitational forces at a much smaller length scale, which happens to be not distant from where we would naively expect gravity to come into the picture. This in some sense tells us that our naive estimate of where gravity becomes “strong” (and spacetime turns nonclassical) cannot be too far off the mark. But at the same time it imposes upon us at least a certain level of prudence: we cannot assume that the quantum-spacetime scale is exactly the Planck length, but we have some encouragement for assuming that it is within a few orders of magnitude of the Planck length.

In closing this long aside on the quantum-gravity/quantum-spacetime scale, let me stress that even prudently assuming a few orders of uncertainty above and below the Planck length is not necessarily safe. It is in my opinion the most natural working assumption in light of the information presently available to us, but we should be fully aware of the fact that our naive estimates might be off by more than a few orders of magnitude. Following the line of reasoning adopted here, this would take the shape of a solution for \({\ell _{{\rm{QST}}}} \sim \sqrt {{G_N}({\ell _{{\rm{QST}}}})}\) that unexpectedly turned out to be wildly different from the Planck length. The outlook of the analysis of the unification of forces appears to discourage such speculations, but we must be open to the possibility that the story summarized here in Figure 1 might just be a cruel numerical accident (more on this toward the end of this review, when I briefly consider the “large extra dimensions” scenario).

1.7 Characteristics of the experiments

Having commented on the first “ingredient” for the search of experiments relevant for quantum spacetime and quantum gravity, which is the estimate of the characteristic scale of this new physics, let me next comment on a few other ingredients, starting with some intuition for the type of quantum-spacetime effects that one might plausibly look for, and what that requires.

As stressed earlier in this section, we cannot place much hope of experimental breakthroughs in the full quantum-black-hole regime. Our best chances are for studies of contexts amenable to a description in terms of the properties of particles in a background quantum spacetime. And, as also already stressed, these effects will be minute, with magnitude governed by some power of the ratio between the Planck length and the wavelength of the particles involved.

The presence of these suppression factors on the one hand sharply reduces our chances of actually discovering quantum-spacetime effects, but on the other hand simplifies the problem of figuring out what the most promising experimental contexts are, since these experimental contexts must enjoy very special properties that would not easily go unnoticed. For laboratory experiments, even an optimistic estimate of these suppression factors leads to a suppression of order \(10^{-16}\), which one obtains by assuming (probably already with some optimism) that at least some quantum-gravity effects are only linearly suppressed by the Planck length, and taking as particle wavelength the shortest wavelengths we are able to produce (\(\sim 10^{-19}\) m). In astrophysics (which, however, limits one to “observations” rather than “experiments”) particles of shorter wavelength are being studied, but even for the highest-energy cosmic rays, with energy of \(\sim 10^{20}\) eV and, therefore, wavelengths of \(\sim 10^{-27}\) m, a suppression of the type \(\ell_p/\lambda\) would take values of order \(10^{-8}\). It is mostly as a result of this type of consideration that traditional quantum-gravity reviews considered the possibility of experimental studies with unmitigated pessimism. However, the presence of these large suppression factors surely cannot suffice for drawing any conclusions. Even just looking within the subject of particle physics we know that certain types of small effects can be studied, as illustrated by the example of the remarkable limits obtained on proton instability. The prediction of proton decay within certain grand unified theories of particle physics is really a small effect, suppressed by the fourth power of the ratio between the mass of the proton and the grand-unification scale, which is only three orders of magnitude smaller than the Planck scale. In spite of this horrifying suppression, of order \([m_{\rm proton}/E_{\rm GUT}]^4 \sim 10^{-64}\), with a simple idea we have managed to acquire full sensitivity to the new effect: the proton lifetime predicted by grand unified theories is of order \(10^{39}\) s, and quite a few generations of physicists would have to invest their entire lifetimes staring at a single proton before seeing it decay, but by managing to keep under observation a large number of protons (think for example of a situation in which \(10^{33}\) protons are monitored) our sensitivity to proton decay is dramatically increased. In that context the number of protons is the (ordinary-physics) dimensionless quantity that works as “amplifier” of the new-physics effect.
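Using the figures just quoted, the power of this amplifier is a one-line estimate (a sketch with round numbers):

```python
# Expected proton-decay events when monitoring many protons at once.
tau = 1e39        # predicted proton lifetime in seconds (figure quoted above)
N = 1e33          # number of monitored protons
seconds_per_year = 3.15e7

print(f"expected decays per year: ~{N / tau * seconds_per_year:.0f}")  # ~30
```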

Outside of particle physics more success stories of this type are easily found: think, for example, of the Brownian-motion studies conducted a century ago. Within the 1905 Einstein description one uses Brownian-motion measurements on macroscopic scales as evidence for the atomic structure of matter. For the Brownian-motion case the needed amplifier is provided by the fact that a very large number of microscopic processes intervene in each single macroscopic effect that is being measured.

It is hard but clearly not impossible to find experimental contexts in which there is effectively a large amplification of some small effects of interest. And this is the strategy that is adopted [52] in the attempts to gain access to the Planck-scale realm.

1.8 Paradigm change and test theories of not everything

Something else that characterizes the work attitude of the community, whose results I am here reviewing, is the expectation that the solution of the quantum-gravity problem will require a significant change of theory paradigm. Members of this community find in the structure of the quantum-gravity problem sufficient elements for expecting that the transition from our current theories to a successful theory of quantum gravity should be no less (probably more) significant than the transition from classical mechanics to quantum mechanics, the prototypical example of a change of theory paradigm.

This marks a strong difference in intuition and methodology with respect to other areas of quantum-gravity research, which do not assume the need for a paradigm change. If the string-theory program turned out to be successful, then quantum gravity would take the shape of just one more (particularly complex but nonetheless consequential) step in the exploitation of the current theory paradigm, the one that took us all the way from QED to the Standard Model of particle physics.

This difference of intuitions even affects the sort of questions the different communities ask. Those not preparing for a change of theory paradigm expect that one day some brilliant mind will wake up with the correct full quantum-gravity theory: a single big conceptual jump leading to a theory that describes potentially everything we know so far.Footnote 8 Something of the sort of the discovery of QCD: a full theory, even though some of its answers to our questions are not immediately manifest once the theory is written out (see, e.g., confinement).

The expectation of those who are instead preparing for a change of theory paradigm is that we will get to a mature formulation of quantum gravity only at the end of a multi-step journey, with each step being of rather humble nature. The model here is the phase of the “old quantum theory”. The change of theory paradigm in going from classical mechanics to quantum mechanics was of such magnitude that we could not possibly have gotten it right in one single jump. Imagine someone, however brilliant, looking at black-body radiation and proposing a solution based on observables described as self-adjoint operators on Hilbert spaces and all that. Planck’s description of black-body radiation was very far from being a full formalization of quantum mechanics, and was even internally unsatisfactory, with a very limited class of contexts and regimes where it could be applied. It was a theory of very few things, but it was a necessary step toward quantum mechanics. A similar role in the gradual emergence of quantum mechanics was played by other theories of limited scope, such as Einstein’s description of the photoelectric effect, Bohr’s description of atoms, and the successful proposal by de Broglie that wave-particle duality should be applied also to matter.

So, while those not preparing for a change of paradigm look for theories of everything, we are looking for theories of very few things, like Planck, Einstein, Bohr, de Broglie and other great contributors to the ultimate advent of quantum mechanics. Let me here add that even when exploiting a successful theory paradigm, the next level of exploitation often still requires us to take some clumsy steps based on theories of few things. Consider Fermi’s description of weak interactions in terms of four-fermion-vertex processes. Fermi’s theory can be applied to a limited class of phenomena and only in a relatively narrow regime, and it is not even a satisfactory theory from the perspective of internal logical consistency. Yet Fermi’s theory was an important and necessary step toward richer and more satisfactory descriptions of weak interactions.

The difference in methodologies is also connected with some practical considerations, linked to the fact that the formalisms presently being considered as solutions of the quantum-gravity problem are so complex that very little is understood of their true physical implications. Some theories of few things can even be inspired by a given theory of everything: since it is de facto impossible to compare the present full candidates for quantum gravity to data, one ends up comparing to data the predictions of an associated “test theory”, a model inspired by some features of the original theory that we do understand (usually not more than qualitatively or semi-quantitatively), but casting them within a simple framework well suited for comparison to experiments (though with no actual guarantee of full compatibility with the original theory).

So, in the eyes of some workers these test theories of few things are needed to bridge the gap between the experimental data and our present understanding of the relevant formalisms. In the eyes of others the test theories of few things are just attempts to bridge the gap between the experimental data available to us and our limited understanding of the quantum-gravity problem.

Essentially, in quantum-spacetime phenomenology one must first develop some intuition for some candidate quantum-spacetime effects. This intuition can come either from analyzing the structure of the formalisms being considered in the search for a solution to the quantum-gravity problem or from analyzing the structure of the quantum-gravity problem itself. Once a class of effects is deemed of interest, some test theories of these candidate effects must be developed, so that they can be used as guidance for experimental searches.

From the perspective of a phenomenologist, some carefully tailored test theories can also be valuable as a sort of common language to be used in assessing the progress made in improving the sensitivity of experiments, a language that must be suitable both for experimentalists and for those working on the development of quantum-gravity theories.

The possibility of contemplating such “quantum-gravity theories of not everything” is facilitated by the fact that the “quantum-gravity problem” can be described in terms of several “subproblems”, each perhaps challenging us as much as some of the full open problems of other areas of physics. To mention just a few of these “subproblems”, let me notice that: (i) it appears likely that the solution of this problem requires a nonclassical description of spacetime geometry; (ii) quantum gravity might have to be profoundly different (from an “information-theory perspective”) from previous fundamental-physics theories, as suggested by certain analyses of the evolution of pure states in a black-hole background; (iii) the perturbative expansions that are often needed for the analysis of experimental data might require the development of new techniques, since it appears that the ones that rely on perturbative renormalizability might be unavailable; and (iv) we must find some way to reconcile general-relativistic background independence with the apparent need of quantum mechanics to be formulated in a given background spacetime.

For each of these aspects of the quantum-gravity problem we can, in principle, attempt to devise formalisms, intended as descriptions of those regimes of the quantum-gravity realm that are dominantly characterized by the corresponding features.

1.9 Sensitivities rather than limits

In providing my description of the present status of quantum-spacetime phenomenology, I shall adopt as my “default mode” that of characterizing the sensitivities that are within reach for certain classes of experiments/observations, with only a few cases where I discuss both sensitivities and available experimental limits. The analysis of sensitivities was the traditional exercise a decade ago, in the early days of modern quantum-spacetime phenomenology, since the key objective then was to establish that sensitivity to effects introduced genuinely at the Planck scale is achievable. In light of the observation I already reported in Section 1.5 (and several other observations reported later in this review), the “case for existence” of quantum-spacetime phenomenology is at this point well settled.

We are now entering a more mature phase, in which we start having the first examples of candidate quantum-spacetime effects for which the development of suitable test theories is approaching a level of maturity such that placing experimental bounds (“limits”) on the parameters of these test theories does deserve intrinsic interest. However, at the time of writing, the transition “from sensitivities to limits” is not yet complete. The cases where I will offer comments on available experimental limits are cases for which (in my opinion) this transition has been made satisfactorily. But in several areas of quantum-spacetime phenomenology it is still common practice to discuss experimental bounds on the basis of a single little-understood experimental result (often a single observation in astrophysics), and most of the test theories are not yet developed to the point that we can attach much significance to placing limits on their parameters. This is a key issue, and throughout this review I will find opportunities to discuss my concerns in more detail and to offer some remarks relevant for completing the needed transition “from sensitivities to limits”. I do plan to update this review regularly, and with each update readers should find the emphasis shifting more and more from sensitivities to experimental limits.

1.10 Other limitations on the scope of this review

After having clarified that the “default mode” of this review provides descriptions of sensitivities (with occasional characterizations of experimental bounds), I should comment on the types of theory and phenomenology that are the main focus of this review. I have prepared other reviews on these and related topics [52, 62] with a broader perspective but much more limited depth. Here my main focus is to analyze and review in some detail the healthy interface between pure theory and phenomenology of quantum spacetime. I shall mostly describe the phenomenology proposals, but the selection of which proposals should be included is primarily based on their proven ability to motivate developments on the pure-theory side and to react to (adaptively take into account) the indications that then emerge from these pure-theory studies. This will be the “default mode” of my selection of topics, with some exceptions allowed in cases where I find that there are promising opportunities for such a healthy interface to mature over the next few years.

The net result of these goals is a certain bias toward proposals for quantum spacetime that originated from (or were inspired by) the study of LQG and/or the study of Planck-scale spacetime noncommutativity. These are the two areas of pure-theory research in which, so far, the desirable two-way interface has most concretely materialized: pure-theory specialists have redirected part of their work toward the topics that phenomenologists have highlighted as most promising for phenomenology; and the work of quantum-spacetime phenomenologists has been in turn influenced by the results then obtained on the pure-theory side.

In addition to a relatively long list of proposals inspired by LQG and/or by Planck-scale space-time noncommutativity, I shall also comment on a few proposals inspired by other approaches to spacetime quantization (e.g., Causal Sets and Noncritical String Theory). From a broader quantum-gravity-problem perspective one should also consider critical string theory, which actually remains the most studied candidate for quantum gravity. However, I focus here on quantum-spacetime effects and effects whose natural characteristic scale is the Planck scale, whereas the phenomenology proposals so far inspired by the critical-string-theory research program do not revolve around quantum properties of spacetime and often the characteristic scale of the effects is not naturally the Planck scale.

I shall observe in Section 2.1.1 that the analysis of critical string theory actually has provided encouragement for the idea that it could also be a model of spacetime quantization, but the relevant aspects of critical string theory are still poorly understood and have not produced phenomenological proposals of the sort I am here reviewing. I do believe that it is likely that in a not-so-distant future some new opportunities for quantum-spacetime phenomenology will arise from this avenue.

1.11 Schematic outline of this review

The main objective of the next Section 2 is to motivate a list of candidate quantum-spacetime effects, on the basis of the structure of the quantum-gravity problem and/or of results obtained in certain theories that are being considered as relevant to the understanding of the quantum-gravity problem. The rest of this review attempts to describe the status of searches for these candidate quantum-spacetime effects.

Choosing what structure to give to Sections 3, 4, 5 and 6 was the main challenge of my work on this review. The option that finally prevailed attempts to assign each phenomenological proposal to a certain area of quantum-spacetime phenomenology. These should be viewed only as tentative assignments, or at least assignments based on a perception of what could be the primary targets of a given phenomenological proposal. And there are some visible limitations: some readers could legitimately argue that a certain subsection I placed in one section would find a more fitting setting in another. Indeed, as I was working on this review, there were a few subsections that kept switching from one section to another. Still, I feel that, if used wisely, the structure I gave is preferable to some of the alternatives that could have been considered. For example, even such a tentative scheme of organization is probably easier to use than a long unstructured list of all the many phenomenological proposals I am considering. And the option of organizing phenomenological proposals on the basis of the theories that motivate them, rather than roughly on the basis of their primary area of relevance in phenomenology, would have been against the whole spirit of this review.

Section 3 focuses on effects that amount to Planck-scale departures from Lorentz/Poincaré symmetry, which is the type of effect toward which the most energetic quantum-spacetime-phenomenology effort has so far been directed. The content of Section 3 has some overlap with Ref. [395], which describes the status of modern tests of Lorentz symmetry and, therefore, is in part also devoted to cases in which such tests are motivated by quantum-spacetime research. However, my perspective will be rather different, focused on the quantum-spacetime-motivated searches and also using the example of Lorentz/Poincaré-symmetry tests to comment on the level of maturity reached by quantum-spacetime phenomenology in relation to the falsification of (test) theories and to the discrimination between different but similar theories. And whereas from the broader viewpoint of probing the robustness of Lorentz symmetry one should consider as significant any proposal capable of improving the bounds established within a given parametrization of departures from Lorentz symmetry, I shall focus on the demands of Planck-scale sensitivity, as required by the objectives of research on Planck-scale quantization of spacetime, which is my main focus here.

In Section 4, I describe the status of other areas of quantum-spacetime phenomenology in which the Planck scale also characterizes the onset of ultraviolet effects, but not of the types that require departures from Lorentz/Poincaré symmetry.

While the primary objectives of this review are the ultraviolet effects linked with the Planck-scale structure of spacetime, in Section 5 I briefly consider the possibility of ultraviolet/infrared (UV/IR) mixing. In such UV/IR-mixing scenarios the role of the Planck scale would be to govern the UV side, possibly then combining with other scales when IR effects are considered.

Sections 3, 4, and 5 concern proposals of (only a few) controlled laboratory experiments and (several) observations in astrophysics. These are the contexts in which one currently finds the more mature proposals, particularly concerning the robustness of claims of Planck-scale sensitivity of some relevant data analyses. However, observations in cosmology should also provide some very valuable opportunities, and there are some “quantum-spacetime-cosmology” proposals, to which I devote Section 6, that can already be used to expose the great potential reach of this type of analysis.

While different proposals of quantum-spacetime phenomenology often involve different formalizations and completely different experimental techniques, there is a setup common to all the proposals described in Sections 3, 4, 5, and 6. This main strategy of quantum-spacetime phenomenology is summarized in Section 7, where I also ponder some of its possible limitations.

Section 8 offers some closing remarks.

2 Quantum-Gravity Theories, Quantum Spacetime, and Candidate Effects

Before getting to the main task of this review, which concerns phenomenology proposals, it is useful to summarize briefly the motivation for studying certain candidate quantum-spacetime effects. The possible sources of motivation come either from analyses of the structure of the quantum-gravity problem or from what is emerging in the development of some theories that have been proposed as candidate solutions of the quantum-gravity problem. As already stressed, my main focus here is on effects that can be linked to spacetime quantization at (about) the Planck scale, and particularly the ones that were involved in the two-way interface that materialized over this last decade between phenomenologists and theorists working on the LQG approach and spacetime noncommutativity.

In the first part of this section, I offer a few comments on some of the approaches being pursued in the study of the quantum-gravity problem, mostly focusing on whether or not they support a quantum-spacetime picture and the role played by the Planck scale. This part focuses primarily on LQG and spacetime noncommutativity, but I also comment briefly on critical string theory and other approaches.

Then, in the second part of this section, I list some key candidate phenomena that could characterize the quantum-spacetime realm. This list is only very tentative, but it seems to me we cannot do any better than this at the present time. Indeed, compiling a list of candidate quantum-spacetime effects is not straightforward. Analogous situations in other areas of physics are usually such that there are a few new theories that have started to earn our trust by successfully describing some otherwise unexplained data, and then often we let those theories guide us toward new effects that should be looked for. The theories that are under consideration for the solution of the quantum-gravity problem, and for a “quantum” (non-classical) description of spacetime, cannot yet claim any success in the experimental realm. Moreover, even if we nonetheless wanted to use them as guidance for experiments, the complexity of these theories proves to be a formidable obstruction. In most cases, especially concerning testable predictions, the best we can presently do with these theories is analyze their general structure and use this as a source of intuition for the proposal of a few candidate effects. Similarly, when we motivate the search for certain quantum-spacetime features on the basis of our present understanding of the quantum-gravity problem, we are in no way assured that they will still find support as better insight into the nature of this problem develops, but it is the best we can do at the present time.

2.1 Quantum-Gravity Theories and Quantum Spacetime

2.1.1 Critical String Theory

The most studied approach to the quantum-gravity problem is a version of string theory that adopts supersymmetry and works in a “critical” number of spacetime dimensions. If this mainstream perspective turned out to be correct, it would be bad news for quantum-spacetime phenomenologists, since the theory is formulated in a classical Minkowski background spacetime. It would be bad news for phenomenology in general because (critical, supersymmetric) string theory is a particularly soft modification of current theories, and the new effects that can be accommodated by the theory are untestably small, if all the new features are indeed introduced (as traditionally assumed) at a string scale roughly given by the Planck scale.

String theory is the natural attempt from a particle-physics perspective, but other perspectives on the quantum-gravity problem remain unimpressed, particularly considering that most results of string theory still only apply in a fixed background Minkowski spacetime. And it is interesting to notice how even the most careful analyses performed from a string-theory perspective end up finding that the case for applicability to the quantum-gravity problem is still rather weak (see, e.g., Ref. [257]).

Notwithstanding this, there has been in recent years a more vigorous effort to develop a string-inspired phenomenology, with inspiration found in mechanisms that are, however, outside the traditional formulation of string theory. This string-inspired phenomenology does not involve spacetime quantization and often does not refer explicitly to the Planck scale, so I shall not discuss it in detail in this review (although there will be scattered opportunities, at points of this review, where it becomes indirectly relevant). The possibility that has received the most attention in recent years is that of “large” extra dimensions [80, 375, 552, 84, 85, 480]. The existence of extra dimensions can be conceived even outside string theory, but it is noteworthy that in string theory the criticality criterion actually requires extra dimensions. If the extra dimensions, as traditionally assumed, have finite size on the order of the Planck length, then one ends up with associated Planck-scale effects in the low-energy realm, where our experiments and observations take place. This would be a classic exercise for quantum-gravity phenomenology, but it appears that the Planck-scale suppression of these extra-dimension effects is so strong that they really could never be seen/tested. The recent interest in the “large extra dimensions” scenario originates from the observation that dimensions of size much larger than the Planck length (but still microscopic), while not particularly natural from a string-theory perspective, may well be allowed in string theory [80, 375, 552, 84, 85, 480]. And for some choices of the number and sizes of the extra dimensions a rich phenomenology is produced.

Most other phenomenological proposals inspired by string theory essentially make use of the fact that, at least as seen by a traditional particle physicist, string theory makes room for several new fields. The new effects are indeed of types that are naturally described by introducing new fields in a classical spacetime background, rather than quantum-spacetime features, and the magnitude of these effects is not naturally governed by the Planck scale.Footnote 9

In spite of these profound differences there are some points of contact between the Planck-scale quantum-spacetime phenomenology, which I am here concerned with, and this string phenomenology. In a quantum spacetime it is necessary to reexamine the issue of spacetime symmetries, and certain specific scenarios for the fate of Lorentz symmetry come into focus. From a different perspective and in a technically different way one also finds reasons to scrutinize Lorentz symmetry in string phenomenology: it is plausible [347] that some string-theory tensor fields (most likely some of the new fields introduced by the theory) could acquire a nonzero vacuum expectation value, in which case evidently one would have a “spontaneous breakdown” of Lorentz symmetry. I shall also comment on the possibility that spacetime quantization might affect the equivalence principle. Again, from a different perspective and in a technically different way, one also finds reasons to scrutinize the equivalence principle in string phenomenology. And again it is typically due to the extra fields introduced in string theory: most notably some scenarios involving the dilaton, a scalar partner to the graviton predicted by string theory, produce violations of the equivalence principle (see, e.g., Ref. [193]).

I should stress here, because of the scope of this review, that the idea of a quantum spacetime is not completely foreign to string theory. It presently appears only at an undigested and/or indirect level of analysis, but it is plausible that future evolutions of the string-theory program might assign a primitive/fundamental role to spacetime quantization. So far the most studied connection with quantum-spacetime ideas comes from a mechanism, analogous to the emergence of noncommutativity of position coordinates in the Landau model (see, e.g., Ref. [101]), that is found to be applicable to the description of strings in the presence of a constant Neveu-Schwarz two-form (“B_{μν}”) field [213, 516]. It should be stressed that these cases of “emerging noncommutativity” (effective descriptions applicable only in certain specific regimes) do not amount to genuine nonclassicality of spacetime. Still, these string-theory results do create a point of contact between research (and particularly phenomenology) on fundamental spacetime noncommutativity and string theory, with the peculiarity that from the string-theory perspective one would not necessarily focus (and typically there is no focus) on the case of noncommutativity introduced at about the Planck scale, since the noncommutativity is instead given in terms of the free specification of the field B_{μν}.
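
For orientation (with conventions that vary across the literature, so signs and factors here should be taken as indicative only): in the Landau model, the projection to the lowest Landau level leaves guiding-center coordinates that no longer commute,

$$[X,Y] = i\,\ell _B^2\,,\qquad \ell _B^2 = {\hbar \over {eB}}\,,$$

so that the magnetic length ℓ_B plays a role loosely analogous to the one the Planck length would play in a fundamental quantum spacetime.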

Concerning the hope of a possible future reformulation of string theory in a way that would accommodate a primitive role for spacetime nonclassicality, my impression is that the key opportunities come from results suggesting that there are fundamental limitations to the localization of a spacetime event in string theory [532, 269, 44, 332]. The significance of these results on the limitations of localizability probably has not been sufficiently appreciated. Only a few authors have emphasized it [551], but I would argue that finding such limitations in a theory originally formulated in a classical spacetime background may well provide the starting point for reformulating the theory completely, perhaps codifying spacetime quantization at a primitive level.

2.1.2 Loop Quantum Gravity

The most studied theory framework providing a quantum description of spacetime is LQG [476, 96, 502, 524, 93]. The intuition of many phenomenologists who have looked at (or actually worked on) LQG is that this theory should predict quite a few testable effects, some of which may well be testable with existing technologies. However, the complexity of the formalism has so far proven unmanageable from the point of view of obtaining crisp physical predictions. Among the many challenges I should at least mention the much debated “classical-limit problem”, which obstructs the way toward a definite set of predictions for the quasi-Minkowski (or quasi-de Sitter, or quasi-FRW) regime, which is where most of the opportunities for phenomenology are found.

However, one may attempt to infer, from the general structure of the theory, motivation for the study of some candidate LQG effects. And, as I shall stress in several parts of this review, this type of attitude has generated a healthy interface between phenomenologists and LQG theorists. Most of the relevant proposals are indeed ignited by the quantum properties of spacetime in LQG, which appear to be primarily codified in a discretization of the area and volume observables [477, 95, 476]. In particular, several studies (see later in this review) have argued that the type of discretization of spacetime observables usually attributed to LQG could be responsibleFootnote 10 for Planck-scale departures from Lorentz symmetry.
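
For orientation, the main sequence of the LQG area spectrum is often quoted (see, e.g., Refs. [477, 95]) in the form

$$A = 8\pi \gamma \,L_p^2\sum\limits_i {\sqrt {{j_i}({j_i} + 1)}}\,,$$

where γ is the Barbero-Immirzi parameter and the half-integers j_i label the spin-network edges puncturing the surface whose area is being measured.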

In addition to a large effort focused on the fate of Lorentz symmetry, there has also been a rather large effort focused on early-Universe cosmology inspired by LQG. Among the appealing features of this cosmology work I should at least mention “singularity avoidance”. Within the LQG approach there might be no alternative to the avoidance of the big-bang singularity, since, at least as presently understood, LQG describes spacetime as having a fundamentally discrete structure governed by difference (rather than differential) equations. This discreteness is expected to become a dominant characteristic of the framework for processes involving comparably small (Planckian) length scales, and in particular it should inevitably give rise to a totally unconventional picture of the earliest stages of the evolution of the Universe. Attempts at developing a setup for a quantitative description of these early-Universe features have been put forward in Refs. [125, 94, 126, 92] and references therein, but one must inevitably resort to rather drastic approximations, since a full LQG analysis is not possible at present.

For the other areas of phenomenology discussed in this review the influence of LQG has been less direct, but it appears safe to assume that it will grow in the coming years. To give a particularly striking example, let me mention the many proposals discussed here that concern spacetime fuzziness. It is evident that LQG gives a fuzzy picture of spacetime (in the sense discussed more precisely in later parts of this review), and definite predictions for these features would provide important guidance for phenomenologists. Even just a semiheuristic derivation of such effects is beyond the reach of our present understanding of LQG, but it will come.

2.1.3 Approaches based on spacetime noncommutativity

The idea of having a nonclassical fundamental description of spacetime is central to the study of spacetime noncommutativity. The formalization most applied in the study of the quantum-gravity/quantum-spacetime problem is mainly based on the formalism of “quantum groups” and essentially assumes that the quantum properties of spacetime should be, at least to some extent, analogous to the quantum properties of phase space in ordinary quantum mechanics. Ordinary quantum mechanics introduces some limitations for procedures intending to obtain a combined determination of both position and momentum, and this is formalized in terms of noncommutativity of the position and momentum observables. With spacetime noncommutativity one essentially assumes that spacetime coordinates should not commute among themselves [211, 391, 374, 384, 70, 98], producing some limitations for the combined determination of more than one coordinate of a spacetime point/event. This has been the formalization of spacetime noncommutativity for which the two-way interface between theory and phenomenology, which is at center stage in this review, has been most significant.

Looking ahead at the future of quantum-spacetime phenomenology, it appears legitimate to hope that another, perhaps even more compelling, candidate concept of noncommutative geometry, the one championed by Connes [185, 184], may provide guidance. At present the most studied applications of this notion of noncommutative geometry are focused on giving a fully geometric description of the standard model of particle physics, with the noncommutativity of geometry used to codify known properties of particle physics in geometric fashion, while keeping spacetime as a classical geometry.

Going back to the quantum-group-based description of spacetime noncommutativity, I should stress that, so far, the most significant developments have concerned attempts to describe the Minkowski limit of the quantum-gravity problem, i.e., noncommutative versions of Minkowski spacetime (spacetimes that reproduce classical Minkowski spacetime in the limit in which the noncommutativity parameters are taken to 0). Some related work has also been directed toward quantum versions of de Sitter spacetime, but very little has been done on spacetime dynamics, and only at a barely exploratory level. This should change in the future. But at the present time this situation could be described by stating that most work on spacetime noncommutativity is considering only one half of the quantum-gravity problem, the quantum-spacetime aspects (neglecting the gravity aspects). Because of the double role of the gravitational field, which in some ways is just like another (e.g., electromagnetic) field given in spacetime but is also the field that describes the structure of spacetime, in quantum-gravity research the idea that this classical field be replaced by a nonclassical one ends up amounting to two concepts: some sort of quantization of gravitational interactions (which might be mediated by a graviton) and some sort of quantization of spacetime structure. At present one might say that only within the LQG approach are we truly exploring both aspects of the problem. String theory, as long as it is formulated in a classical (background) spacetime, focuses in a sense on the quantization of the gravitational interaction, and sets aside (or will address in the future) the possible “quantization” of spacetime [551]. Spacetime noncommutativity is an avenue for exploring the implications of the other side, the quantization of spacetime geometry.

The description of (Minkowski-limit) spacetime in terms of (quantum-group-based) spacetime noncommutativity has proven particularly valuable in providing intuition for the fate of (Minkowski-limit/Poincaré) spacetime symmetries at the Planck scale. Parity transformations, too, appear to be affected by at least some schemes of spacetime noncommutativity, and this in turn provides motivation for testing CPT symmetry.

Unfortunately, spacetime fuzziness, which is the primary intuition that leads most researchers to noncommutativity, frustratingly remains only vaguely characterized in current research on noncommutative spacetimes; certainly not characterized with the sharpness needed for phenomenology.

2.1.4 Other proposals

I shall not attempt to review the overall status of quantum-gravity research. The challenge of reviewing and offering a perspective on quantum-spacetime phenomenology is already overwhelming. And according to the perspective of this phenomenological approach the central challenge of quantum-gravity research is to find the first experimental manifestations of the quantum-gravity realm. The different formalisms proposed for the study of the quantum-gravity problem can be very valuable for this objective, but only inasmuch as they provide intuition for the type of new effects that might characterize the quantum-gravity realm. In practice, at least for the next few decades, what will be compared to data will be simple test theories inspired by our understanding of the quantum-gravity problem or by the intuition obtained in the study of formal theories of quantum gravity. The possibility of comparing a full quantum-gravity theory directly to experiments appears to lie in a still-distant future, as a result of the complexity of these theories (which prevents us from deriving testable predictions).

I have invested a few pages on string theory, LQG and spacetime noncommutativity for different reasons. Providing some reasonably detailed comments on string theory was encouraged, in spite of the lack of a fundamental role for spacetime quantization, by its prominent role in the quantum-gravity literature. And, as stressed above, LQG and spacetime noncommutativity are particularly relevant for this review because the scenarios of spacetime quantization these approaches consider/derive have been a particularly influential source of intuition for proposals in quantum-spacetime phenomenology. Moreover, it is within the LQG and spacetime-noncommutativity communities that we have, so far, witnessed the most significant examples of the healthy two-way cross-influence between formal theory and phenomenology.

I shall not offer comparably detailed comments on any other quantum-gravity formalism, but there are a few that I should mention because of the significance of their role in quantum-spacetime phenomenology. First of all let me mention the noncritical “Liouville string theory” approach championed by Ellis, Mavromatos and Nanopoulos [221, 223, 65, 399]. This is a variant of the string-theory approach that (unlike the mainstream critical-string-theory approach) adopts the choice of working in a “noncritical” number of spacetime dimensions, and describes time in a novel way. As will be evident at several points of this review, Ellis, Mavromatos, Nanopoulos and collaborators have developed noncritical Liouville string theory from a perspective that admirably keeps phenomenology always at center stage, and this has been a key influence on several quantum-spacetime-phenomenology research lines.

Another approach for which there is by now a rather sizable research program aimed at phenomenological consequences is the one based on “discrete causal sets” [131, 470]. This approach to spacetime discretization exploits the fact that the causal structure of a Lorentzian spacetime determines its geometry up to a conformal factor. One can then take the causal structure as primary, starting with a finite set of points equipped with a causal ordering, and recover the conformal factor (i.e., the volume information) by counting points. Several opportunities for phenomenology are then produced by the discretization of spacetime, as illustrated schematically below.
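
Purely as an illustration of this “order plus number” idea (a toy sketch, not taken from Refs. [131, 470]), one can sprinkle points at random into a causal diamond of two-dimensional Minkowski spacetime, keep only the causal order, and then recover volumes by counting:

```python
# Minimal illustrative sketch: a Poisson "sprinkling" into a causal
# diamond of 2D Minkowski spacetime, with the causal order taken as
# primary and volume recovered by counting elements.
import numpy as np

rng = np.random.default_rng(0)
N = 2000
# Light-cone coordinates u = t+x, v = t-x; sprinkle uniformly in the
# unit coordinate square 0 < u, v < 1. Element i causally precedes
# element j iff u[i] < u[j] and v[i] < v[j].
u, v = rng.random(N), rng.random(N)

def counted_volume(i, j):
    """Estimate the (u,v)-coordinate volume (proportional to spacetime
    volume) of the causal interval between elements i and j by counting
    the sprinkled elements it contains (density is N per unit volume)."""
    inside = (u > u[i]) & (u < u[j]) & (v > v[i]) & (v < v[j])
    return inside.sum() / N

# Pick elements near the past and future tips of the diamond and compare
# the counting estimate with the continuum coordinate volume.
i, j = int(np.argmin(u + v)), int(np.argmax(u + v))
continuum = (u[j] - u[i]) * (v[j] - v[i])
print(f"counted: {counted_volume(i, j):.3f}   continuum: {continuum:.3f}")
```

In a genuine causal set the counting is the only available notion of volume; the continuum value exists here only because the sketch starts from a sprinkling of a classical spacetime.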

Still on the subject of approaches in which a role is played by spacetime discretization, I should also bring to the attention of my readers the recent developments in the study of causal dynamical triangulations [45, 371, 46, 47, 372, 49]. Through causal dynamical triangulations one obtains an explicit, nonperturbative and background-independent realization of the formal gravitational path integral on a given differential manifold. And some of the results obtained within this approach already provide elements of valuable intuition for quantum-spacetime phenomenology, as exemplified by the results providing [48] first evidence for a scale-dependent spectral dimension of spacetime, varying from four at large scales to two at scales on the order of the Planck length. These “running spectral dimensions” could have very significant applications in phenomenology, and early signs that this might indeed be the case can be found in the debate reported in Refs. [424, 505, 425] concerning the implications for primordial gravitational waves.

Also particularly important for quantum-spacetime phenomenology is the program of asymptotically-safe quantum gravity. This is an attempt at the nonperturbative construction of a predictive quantum field theory of the metric tensor centered on the availability of a non-Gaussian renormalization-group fixed point [544, 466, 212]. There are a few perspectives from which this asymptotic-safety program is influencing part of the research on quantum-spacetime phenomenology. As an example of phenomenology work that was directly inspired by asymptotic safety, I should mention the expectation that quantum-gravity effects might also be important in a large-distance regime [469], with possible relevance for phenomenology. I shall comment on this later in this review, also in relation to the idea of “UV/IR mixing” as a possibility that appears to be plausible even within other perspectives on quantum gravity and quantum spacetime. And there are significant indications (see, e.g., Ref. [468]) that ultimately the description of spacetime in a quantum gravity with asymptotic safety will be a quantum-spacetime description. Also significant for quantum-spacetime phenomenology is the whole idea of running gravitational couplings, which is central to asymptotic safety. As mentioned we tentatively assume that quantum-spacetime effects originate at the Planck scale, but the Planck scale is computed in terms of (the IR value of) Newton’s constant and might give us a misleading intuition for the characteristic scales of spacetime quantization.

There are also some perspectives on the quantum-gravity problem that at present I do not see as direct opportunities for quantum-spacetime phenomenology, but that certainly are playing the role of “intuition builders” for the phenomenologists, affecting the perception of the quantum-gravity problem that guides some of the relevant research. Among these I should mention the rather large literature on the “emergent gravity paradigm” (see, e.g., Refs. [103, 538, 443, 513, 555, 499, 297]). This literature actually contains a variety of possible ways through which gravity could be described not as a fundamental aspect of the laws of nature, but rather as an emergent feature. A simple analogy here is with pion-mediated strong interactions, which emerge at low energies from the quantum chromodynamics of quarks and gluons.

And I should mention as another potential “intuition builder” for the phenomenologists a class of studies that in various ways place dissipation in connection with aspects of the quantum-gravity problem (see, e.g., Refs. [518, 296]).

2.2 Candidate effects

From the viewpoint of phenomenologists, the theory proposals I briefly considered in Section 2.1 (all still lacking any experimental success) can only serve the purpose of inspiring some test theories suitable for comparison to data.

In this Section, I will briefly motivate a partial list of possible classes of effects that could characterize the quantum-gravity/quantum-spacetime realm. And indeed in compiling such a list, one ends up using both intuition based on the general structure of the quantum-gravity problem and intuition based on what has been so far understood of theories that predict or assume spacetime quantization.

Both the analysis of the general structure of the quantum-gravity problem and the analysis of proposed approaches to the solution of the quantum-gravity problem provide a rather broad collection of intuitions for what might be the correct “quantization” of spacetime (see, e.g., Refs. [406, 532, 269, 44, 332, 442, 211, 20, 432, 50, 249, 489]), and in turn this variety of scenarios produces a rather broad collection of hypotheses concerning possible experimental manifestations of spacetime quantization.

2.2.1 Planck-scale departures from classical-spacetime symmetries

From a quantum-spacetime perspective it is natural to expect that some opportunities for phenomenology might come from tests of spacetime symmetries. It is relatively easy to test spacetime symmetries very sensitively, and it is natural to expect that introducing new (“quantum”) features in spacetime structure would affect the symmetries.

Let us consider in particular the Minkowski limit, the one described by classical Minkowski spacetime in current theories: there is a one-to-one (duality) relation between classical Minkowski spacetime and the classical (Lie-)algebra of Poincaré symmetry. Poincaré transformations are smooth, arbitrary-magnitude classical transformations and it is, therefore, natural to subject them to scrutinyFootnote 11 if classical Minkowski spacetime is replaced by a quantized/discretized version.

The most active quantum-spacetime-phenomenology research area is indeed the one considering possible Planck-scale departures from Poincaré/Lorentz symmetries. One possibility that has been considered in detail is the one of some symmetry-breaking mechanism affecting Poincaré/Lorentz symmetry. An alternative, which I advocated a few years ago [58, 55], is the one of a “spacetime quantization” that deforms but does not break some spacetime symmetries.

Besides the analysis of the general structure of the quantum-gravity problem, encouragement for these Poincaré/Lorentz-symmetry studies is also found within some of the most popular proposals for spacetime quantization. As mentioned, according to the present understanding of LQG, the fundamental description of spacetime involves some intrinsic discretization [476, 502], and, although very little that is robust is presently known about the Minkowski limit of the theory, several indirect arguments suggest that this discretization should induce departures from classical Poincaré symmetry. While most of the LQG literature on the fate of Poincaré symmetries argues for symmetry violation (see, e.g., Refs. [247, 33]), there are some candidate mechanisms (see, e.g., Refs. [75, 237, 503]) that appear to provide opportunities for a deformation of symmetries in LQG.

A growing number of quantum-gravity researchers are also studying noncommutative versions of Minkowski spacetime, which are promising candidates as “quantum-gravity theories of not everything”, i.e., opportunities to get insight on some, but definitely not all, aspects of the quantum-gravity problem. For the most studied examples, canonical noncommutativity,

$$[{x_\mu},{x_\nu}] = i{\theta _{\mu \nu}},$$
(3)

and κ-Minkowski noncommutativity,

$$[{x_m},t] = {i \over \kappa}{x_m},\quad \;[{x_m},{x_l}] = 0,$$
(4)

the issues relevant for the fate of Poincaré symmetry are very much in focus, and departures from Poincaré symmetry appear to be inevitable.Footnote 12
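
For concreteness, one can at least check the algebraic consistency of Eq. (4) explicitly. The following minimal sympy sketch (purely illustrative, not taken from the cited literature) realizes the κ-Minkowski coordinates as 3×3 matrices; since these matrices are not Hermitian, this is a check of the commutation relations only, not a physical representation:

```python
# Minimal sketch: a 3x3 matrix realization of the kappa-Minkowski
# coordinates of Eq. (4), used only to check the relations
# [x_m, t] = (i/kappa) x_m and [x_m, x_l] = 0.
import sympy as sp

kappa = sp.symbols('kappa', positive=True)

def E(i, j):
    """3x3 matrix unit: 1 in row i, column j, zeros elsewhere."""
    m = sp.zeros(3, 3)
    m[i, j] = 1
    return m

x1, x2 = E(0, 1), E(0, 2)                  # two spatial coordinates
t = (sp.I / kappa) * (E(1, 1) + E(2, 2))   # the time coordinate

comm = lambda a, b: a * b - b * a

assert comm(x1, t) == (sp.I / kappa) * x1
assert comm(x2, t) == (sp.I / kappa) * x2
assert comm(x1, x2) == sp.zeros(3, 3)
print("kappa-Minkowski commutation relations verified")
```

In particular, the Jacobi identity is automatic in any matrix realization, so Eq. (4) defines a consistent Lie-algebra structure for the coordinates.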

2.2.2 Planck-scale departures from CPT symmetry

Arguments suggesting that CPT violation might arise in the quantum-gravity realm have a long tradition [279, 445, 540, 446, 42, 222, 298, 345, 117] (and also see, e.g., the more recent Refs. [21, 423, 330]). And, in light of the scope of this review, I should stress that specifically the idea of spacetime quantization invites one to place CPT symmetry under scrutiny. Indeed, locality (in addition to unitarity and Lorentz invariance) is a crucial ingredient for ensuring CPT invariance, and a common feature of all the proposals for spacetime quantization is the presence of limitations to locality, at least intended as limitations to the localizability of a spacetime event.

Unfortunately, a proper analysis of CPT symmetry requires a level of understanding of the formalism that is often beyond our present reach in the study of formalizations of the concept of quantum spacetime. In LQG one would need good control of the Minkowski (classical) limit, and of the description of charged particles in that limit, and this is still beyond what can presently be done within LQG.

Similar remarks apply to spacetime noncommutativity, although in that case some indirect arguments relevant for CPT symmetry can be meaningfully structured. For example, in Ref. [70] it is observed that certain spacetime noncommutativity scenarios appear to require a deformation of P (parity) transformations, which would result in a corresponding deformation of CPT transformations.

In the mentioned quantum-spacetime picture based on noncritical Liouville string theory [221, 224], evidence of violations of CPT symmetry has been reported [220], and later in this review I shall comment on the exciting phenomenology that was inspired by these results.

2.2.3 Decoherence and modifications of the Heisenberg principle

It is well established that the availability of a classical spacetime background has been instrumental to the successful tests of quantum mechanics so far performed. The applicability of quantum mechanics to a broader class of contexts remains an open experimental question. If indeed spacetime is quantized, there might be some associated departures from quantum mechanics. And this quantum-spacetime intuition fits well with a rather popular intuition for the broader context of quantum-gravity research, as discussed for example in Refs. [280, 361].

Some of the test theories used to model spacetime quantization have been found to provide motivation for departures from quantum mechanics in the form of “decoherence”, loss of quantum coherence [432, 50, 246]. A description of decoherence has been inspired by the mentioned noncritical Liouville string theory [221, 224], and is essentially the core feature of the formalism advocated by Percival and collaborators [452, 453, 454].

The possibility of modifications of the Heisenberg principle and of the de Broglie relation has also been much studied in accordance with the intuition that some aspects of quantum mechanics might need to be adapted to spacetime quantization. Although the details of the mechanism that produces such modifications vary significantly from one picture of spacetime quantization to another [322, 22, 122], one can develop an intuition of rather general applicability by noticing that the form of the de Broglie relation in ordinary quantum mechanics reflects the properties of the classical geometry of spacetime that is there assumed. More precisely, the de Broglie relation reflects the properties of the differential calculus on the spacetime manifold, since ordinary quantum mechanics describes the momentum observable in terms of a derivative operator (assuming the Heisenberg principle holds), which, acting on wave functions with wavelength λ, leads to the de Broglie relation p = h/λ. In a nonclassical (“quantum”) spacetime one must adopt new forms of differential calculus [500, 390], and as a result the description of the momentum observable and its relation to the wavelength of a wave must be reformulated [322, 22, 122, 63].
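
For orientation, here is the standard-quantum-mechanics version of the statement just made: representing the momentum observable as a derivative operator, a plane wave of wavelength λ is a momentum eigenstate with eigenvalue h/λ,

$$\hat p\,{e^{2\pi ix/\lambda}} = - i\hbar {\partial \over {\partial x}}{e^{2\pi ix/\lambda}} = {{2\pi \hbar} \over \lambda}\,{e^{2\pi ix/\lambda}} = {h \over \lambda}\,{e^{2\pi ix/\lambda}}\,,$$

and it is precisely this link between momentum and the differential calculus on the spacetime manifold that a quantum spacetime would force one to reexamine.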

While the possibility of spacetime quantization provides a particularly direct logical line toward modifications of laws of quantum mechanics, one should consider such modifications as natural for the whole quantum-gravity problem (even when studied without assuming spacetime quantization). For example, in string theory, assuming the availability of a classical spacetime background, one finds some evidence of modification of the Heisenberg principle (the “Generalized Uncertainty Principle” discussed, e.g., in Refs. [532, 269, 44, 332, 551]).

2.2.4 Distance fuzziness and spacetime foam

A description that is often used to give some intuition for the effects induced by spacetime quantization is Wheeler’s “spacetime foam”, even though it does not amount to an operative definition. Most authors see it as motivation to look for formalizations of spacetime in which the distance between two events cannot be sharply determined, and the metric is correspondingly fuzzy. As I shall discuss in Section 4, a few attempts to operatively characterize the concept of spacetime foam and to introduce corresponding test theories have been recently developed. And a rather rich phenomenology is maturing from these proposals, often centered both on spacetime fuzziness per se and on the associated decoherence.

Unfortunately, very little guidance can be obtained from the most studied quantum-spacetime pictures. In LQG this type of experimentally tangible characterization of spacetime foam is not presently available. And, remarkably, even with spacetime noncommutativity, an idea that was mainly motivated by the spacetime-foam intuition of a nonclassical spacetime, we are presently unable to describe, with the type of crisp physical characterization needed for phenomenology, the fuzziness that would, for example, intervene in the operation of an interferometer.

2.2.5 Planck-scale departures from the equivalence principle

The possibility of violations of the equivalence principle has not been extensively studied from a quantum-spacetime perspective, in spite of the fact that spacetime quantization does provide some motivation for placing under scrutiny at least some implications of the equivalence principle. This is at least suggested by the observation that locality is a key ingredient of the present formulation of the equivalence principle: the equivalence principle ensures that (under appropriate conditions) two point particles follow the same geodesic independently of their mass. But it is well established that this is not applicable to extended bodies, and presumably it is also not applicable to “delocalized point particles” (point particles whose position is affected by uncontrolled uncertainties). Presumably, then, the description of particles in a spacetime that is nonclassical (“quantized”), and, therefore, sets absolute limitations on the identification of a spacetime point, would also require departures from some aspects of the equivalence principle.

Relatively few studies have been devoted to violations of the equivalence principle from a quantum-spacetime perspective. Examples are the study reported in Ref. [149], which obtained violations of the equivalence principle from quantum-spacetime-induced decoherence, the study based on noncritical Liouville string theory reported in Ref. [227], and the study based on metric fluctuations reported in Ref. [263].

Also the broader quantum-gravity literature (even without spacetime quantization) provides motivation for scrutinizing the equivalence principle. In particular, a strong phenomenology centered on violations of the equivalence principle was proposed in the string-theory-inspired studies reported in Refs. [521, 195, 196, 194, 193, 192] and references therein, which actually provide a description of violations of the equivalence principleFootnote 13 at a level that might soon be within our experimental reach.

Also relevant to this review is the possibility that violations of the equivalence principle might be a by-product of violations of Lorentz symmetry. In particular, this is suggested by the analysis in Ref. [338], where the gravitational couplings of matter are studied in the presence of Lorentz violation.

3 Quantum-Spacetime Phenomenology of UV Corrections to Lorentz Symmetry

The largest area of quantum-spacetime-phenomenology research concerns the fate of Lorentz (/Poincaré) symmetry at the Planck scale, focusing on the idea that the conjectured new effects might become manifest at low energies (the particle energies accessible to us, which are much below the Planck scale) in the form of “UV corrections”, correction terms with powers of energy in the numerator and powers of the Planck scale in the denominator.
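
A representative example (only one of several parametrizations adopted in the literature, with η a dimensionless coefficient and the integer n, typically 1 or 2, fixing the power of Planck-scale suppression) is a modified dispersion relation of the form

$${m^2} \simeq {E^2} - {\vec p^{\,2}} + \eta \,{\vec p^{\,2}}{\left({E \over {{E_p}}}\right)^n}\,.$$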

Among the possible effects that might signal departures from Lorentz/Poincaré symmetry, interest has been predominantly directed toward the study of the form of the energy-momentum (dispersion) relation. This was due both to the (relative) robustness of the associated theory results in quantum-spacetime research and to the availability of very valuable opportunities for related data analyses. Indeed, as several examples in this section will show, over the last decade there were very significant improvements of the sensitivity of Lorentz- and Poincaré-symmetry tests.

Before discussing some actual phenomenological analyses, I find it appropriate to start this section with some preparatory work. This will include some comments on the “Minkowski limit of quantum gravity”, which I have already referred to, but which should be discussed a bit more carefully. And I shall also give a rather broad perspective on the quantum-spacetime implications for the setup of test theories suitable for the study of the fate of Lorentz/Poincaré symmetry at the Planck scale.

3.1 Some relevant concepts

3.1.1 The Minkowski limit

In our current conceptual framework Poincaré symmetry emerges in situations that allow the adoption of a Minkowski metric throughout. These situations could be described as the “classical Minkowski limit”.

It is not inconceivable that quantum gravity might admit a limit in which one can assume throughout a metric of Minkowski type (at least as an expectation value), while some Planck-scale features of the fundamental description of spacetime (such as spacetime discreteness and/or spacetime noncommutativity) are still not completely negligible. This “nontrivial Minkowski limit” would be such that the role of the Planck scale in the description of gravitational phenomena can essentially be ignored (so that indeed one can make reference to a fixed Minkowski metric), but the possible role of the Planck scale in spacetime structure/kinematics is still significant. This intuition inspires the work on quantum Minkowski spacetimes, and the analysis of the symmetries of these quantum spacetimes.

It is not obvious that the correct quantum gravity should admit such a nontrivial Minkowski limit. With the little we presently know about the quantum-gravity problem we must be open to the possibility that the Minkowski limit could actually be trivial, i.e., that whenever the role of the Planck scale in the description of gravitational phenomena can be neglected (and the metric is Minkowskian at least on average) one should also neglect the role of the Planck scale in spacetime structure. But the hypothesis of a nontrivial Minkowski limit is worth exploring: it is a plausible hypothesis and it would be extremely valuable for us if quantum gravity did admit such a limit, since it might open a wide range of opportunities for accessible experimental verification, as I shall stress in what follows.

Whenever I mention a result on the theory side concerning the fate of Poincaré symmetry at the Planck scale, it is implied that the authors have considered (or attempted to consider) the Minkowski limit of their preferred formalism.

3.1.2 Three perspectives on the fate of Lorentz symmetry at the Planck scale

It is fair to state that each quantum-gravity research line can be connected with one of three perspectives on the problem: the particle-physics perspective, the GR perspective and the condensed-matter perspective.

From a particle-physics perspective it is natural to attempt to reproduce as much as possible the successes of the Standard Model of particle physics. One is tempted to see gravity simply as one more gauge interaction. From this particle-physics perspective a natural solution of the quantum-gravity problem should have its core features described in terms of graviton-like exchange in a background classical spacetime. Indeed this structure is found in string theory, the most developed among the quantum-gravity approaches that originate from a particle-physics perspective.

The particle-physics perspective provides no a priori reasons to renounce Poincaré symmetry, since Minkowski classical spacetime is an admissible background spacetime, and in classical Minkowski spacetime there cannot be any a priori obstruction to classical Poincaré symmetry. Still, a breakdown of Lorentz symmetry, in the sense of spontaneous symmetry breaking, is possible, and this possibility has been studied extensively over the last few years, especially in string theory (see, e.g., Refs. [347, 213] and references therein).

Complementary to the particle-physics perspective is the GR perspective, whose core characteristic is the intuition that one should firmly reject the possibility of relying on a background spacetime [476, 502]. According to GR the evolution of particles and the structure of spacetime are self-consistently connected: rather than specify a spacetime arena (a spacetime background) beforehand, the dynamical equations determine at once both the spacetime structure and the evolution of particles. Although less publicized, there is also growing awareness of the fact that, in addition to the concept of background independence, the development of GR relied heavily on the careful consideration of the in-principle limitations that measurement procedures can encounter.Footnote 14 In light of the various arguments suggesting that, whenever both quantum mechanics and GR are taken into account, there should be an in-principle Planck-scale limitation to the localization of a spacetime point (an event), the GR perspective invites one to renounce any direct reference to a classical spacetime [211, 20, 432, 50, 249]. Indeed, this requirement that spacetime be described as fundamentally nonclassical (“fundamentally quantum”), so that the measurability limitations be reflected by a corresponding measurability-limited formalization of spacetime, is another element of intuition that is guiding quantum-gravity research from the GR perspective. This naturally leads one to consider discretized spacetimes, as in the LQG approach or noncommutative spacetimes.

Results obtained over the last few years indicate that this GR perspective naturally leads, through the emergence of spacetime discreteness and/or noncommutativity, to some departures from classical Poincaré symmetry. LQG and some other discretized-spacetime quantum-gravity approaches appear to require a description of the familiar (classical, continuous) Poincaré symmetry as an approximate symmetry, with departures governed by the Planck scale. And in the study of noncommutative spacetimes some Planck-scale departures from Poincaré symmetry appear to be inevitable.

The third possibility is a condensed-matter perspective on the quantum-gravity problem (see, e.g., Refs. [537, 358, 166]), in which spacetime itself is seen as a sort of emerging critical-point entity. Condensed-matter theories are used to describe the degrees of freedom that are measured in the laboratory as collective excitations within a theoretical framework whose primary description is given in terms of very different, and often practically inaccessible, fundamental degrees of freedom. Close to a critical point some symmetries arise for the collective-excitation theory, which do not carry the significance of fundamental symmetries, and are, in fact, lost as soon as the theory is probed away from the critical point. Notably, some familiar systems are known to exhibit special-relativistic invariance in certain limits, even though, at a more fundamental level, they are described in terms of a nonrelativistic theory. So, from the condensed-matter perspective on the quantum-gravity problem it is natural to see the familiar classical continuous Poincaré symmetry only as an approximate symmetry.

Further encouragement for the idea of an emerging spacetime (though not necessarily invoking the condensed-matter perspective) comes from the realization [304, 533, 444] that the Einstein equations can be viewed as an equation of state, so in some sense thermodynamics implies GR and the associated microscopic theory might not look much like gravity.

3.1.3 Aside on broken versus deformed spacetime symmetries

If the fate of Poincaré symmetry at the Planck scale is nontrivial, the simplest possibility is the one of broken Poincaré symmetry, in the same sense that other symmetries are broken in physics. As mentioned, an example of a suitable mechanism is provided by the possibility that a tensor field might have a vacuum expectation value [347].

An alternative possibility, which in recent years has attracted the interest of a growing number of researchers within the quantum-spacetime and quantum-gravity communities, is that of deformed (rather than broken) spacetime symmetries, in the sense of the “doubly-special-relativity” (DSR) proposal I put forward a few years ago [58]. I have elsewhere [63] attempted to expose the appeal of this possibility. Still, given the purposes of this review, I must take into account that the development of phenomenologically-viable DSR models is still in its infancy. In particular, several authors (see, e.g., Refs. [56, 493, 202, 292]) have highlighted the challenges for the description of spacetime, and in particular spacetime locality, that inevitably arise when contemplating a DSR scenario. I am confident that some of the most recent DSR studies, particularly those centered on the analysis of “relative locality” [71, 504, 88, 67], contain the core ideas that in due time will allow us to fully establish a robust DSR picture of spacetime, but I nonetheless feel that we are still far from the possibility of developing a robust DSR phenomenology.

Interested readers have available a rather sizable DSR literature (see, e.g., Ref. [58, 55, 349, 140, 386, 387, 354, 388, 352, 353, 350, 26, 200, 493, 465, 291, 314, 366] and references therein), but for the purposes of this review I shall limit my consideration of DSR ideas on phenomenology to a single one of the (many) relevant issues, which is an observation that concerns the compatibility between modifications of the energy-momentum dispersion relation and modifications of the law of conservation of energy-momentum. My main task in this Section is to illustrate the differences (in relation to this compatibility issue) between the broken-symmetry hypothesis and the DSR-deformed-symmetry hypothesis.

The DSR scenario was proposed [58] as a sort of alternative perspective on the results on Planck-scale departures from Lorentz symmetry that had been reported in numerous articles [66, 247, 327, 38, 73, 463, 33] between 1997 and 2000. These studies were advocating a Planck-scale modification of the energy-momentum dispersion relation, usually of the form \({E^2} = {p^2} + {m^2} + \eta L_p^n{p^2}{E^n} + O(L_p^{n + 1}{E^{n + 3}})\), on the basis of preliminary findings in the analysis of several formalisms in use for Planck-scale physics. The complexity of the formalisms is such that very little else was known about their physical consequences, but the evidence of a modification of the dispersion relation was becoming robust. In all of the relevant papers it was assumed that such modifications of the dispersion relation would amount to a breakdown of Lorentz symmetry, with associated emergence of a preferred class of inertial observers (usually identified with the natural observer of the cosmic microwave background radiation).

However, it then turned out to be possible [58] to avoid this preferred-frame expectation, following a line of analysis in many ways analogous to the one familiar from the developments that led to the emergence of special relativity (SR), now more than a century ago. In Galilean relativity there is no observer-independent scale, and in fact the energy-momentum relation is written as E = p²/(2m). As experimental evidence in favor of Maxwell’s equations started to grow, the fact that those equations involve a fundamental velocity scale appeared to require the introduction of a preferred class of inertial observers. But in the end we discovered that the situation was not demanding the introduction of a preferred frame, but rather a modification of the laws of transformation between inertial observers. Einstein’s SR introduced the first observer-independent relativistic scale (the velocity scale c), its dispersion relation takes the form E² = c²p² + c⁴m² (in which c plays a crucial role in relation to dimensional analysis), and the presence of c in Maxwell’s equations is now understood as a manifestation of the necessity to deform the Galilei transformations.

It is plausible that we might be presently confronted with an analogous scenario. Research in quantum gravity is increasingly providing reasons for interest in Planck-scale modifications of the dispersion relation, and, while it was customary to assume that this would amount to the introduction of a preferred class of inertial frames (a “quantum-gravity ether”), the proper description of these new structures might require yet again a modification of the laws of transformation between inertial observers. The new transformation laws would have to be characterized by two scales (c and λ) rather than the single one (c) of ordinary SR.

While the DSR idea came to be proposed in the context of studies of modifications of the dispersion relation, one could have other uses for the second relativistic scale, as stressed in parts of the DSR literature [58, 55, 349, 140, 386, 387, 354, 388, 352, 353, 350, 26, 200, 493, 465, 291, 314, 366]. Instead of promoting a modified dispersion relation to the status of relativistic invariant, one can have DSR scenarios with undeformed dispersion relations but, for example, with an observer-independent bound on the accuracy achievable in the measurement of distances [63]. However, as announced, within the confines of this quantum-spacetime-phenomenology review I shall only make use of one DSR argument, which applies to cases in which the dispersion relation is indeed modified. This concerns the fact that in the presence of observer-independent modifications of the dispersion relation (DSR-)relativistic invariance imposes the presence of associated modifications of the law of energy-momentum conservation. More general discussions of this issue are offered in Refs. [58, 63], but it is here sufficient to illustrate it in a specific example. Let us then consider a dispersion relation whose leading-order deformation (by a length scale λ) is given by

$${E^2} \simeq {\vec p^2} + {m^2} + \lambda {\vec p^2}E.$$
(5)

This dispersion relation is clearly an invariant of classical space rotations, and of deformed boost transformations generated by [58, 63]

$${\mathcal{B}_j} \simeq i{p_j}{\partial \over {\partial E}} + i\left({E + {\lambda \over 2}{{\vec p}^2} - \lambda {E^2}} \right){\partial \over {\partial {p_j}}} - i\lambda {p_j}\left({{p_k}{\partial \over {\partial {p_k}}}} \right).$$
(6)
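
That (5) is indeed annihilated by the boosts (6) at first order in λ can be checked by straightforward algebra. The following sympy sketch (my own verification, not part of the original derivations) restricts to a single spatial dimension, where the last term of (6) reduces to −iλp²∂/∂p, and confirms the invariance:

```python
import sympy as sp

E, p, lam = sp.symbols('E p lam', real=True)

# Deformed mass-shell function associated with Eq. (5): C = E^2 - p^2 - lam*p^2*E
C = E**2 - p**2 - lam * p**2 * E

# One-dimensional restriction of the boost generator (6); the overall factor
# of i is dropped, since it plays no role in the invariance check
def boost(f):
    return (p * sp.diff(f, E)
            + (E + lam * p**2 / 2 - lam * E**2) * sp.diff(f, p)
            - lam * p**2 * sp.diff(f, p))

expr = sp.expand(boost(C))
print(expr.coeff(lam, 0))               # -> 0 : ordinary boost invariance
print(sp.simplify(expr.coeff(lam, 1)))  # -> 0 : invariance at first order in lam
```
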

The issue concerning energy-momentum conservation arises because both the dispersion relation and the law of energy-momentum conservation must be (DSR-)relativistic. And the boosts (6), which relativistically enforce the modification of the dispersion relation, are incompatible with the standard form of energy-momentum conservation. For example, for processes with two incoming particles, a and b, and two outgoing particles, c and d, the requirements \({E_a} + {E_b} - {E_c} - {E_d} = 0\) and \({\vec p_a} + {\vec p_b} - {\vec p_c} - {\vec p_d} = 0\) are not observer-independent laws according to (6). An example of a modification of energy-momentum conservation that is compatible with (6) is [58]

$${E_a} + {E_b} + \lambda {p_a}{p_b} \simeq {E_c} + {E_d} + \lambda {p_c}{p_d},$$
(7)
$${p_a} + {p_b} + \lambda ({E_a}{p_b} + {E_b}{p_a}) \simeq {p_c} + {p_d} + \lambda ({E_c}{p_d} + {E_d}{p_c}).$$
(8)

And analogous formulas can be given for any process with n incoming particles and m outgoing particles. In particular, in the case of a two-body particle decay a → b + c the laws

$${E_a} \simeq {E_b} + {E_c} + \lambda {p_b}{p_c},$$
(9)
$${p_a} \simeq {p_b} + {p_c} + \lambda ({E_b}{p_c} + {E_c}{p_b})$$
(10)

provide an acceptable (observer-independent, covariant according to (6)) possibility.

This observation provides a general motivation for contemplating modifications of the law of energy-momentum conservation in frameworks with modified dispersion relations. And I shall often test the potential impact on the phenomenology of introducing such modifications of the conservation of energy-momentum by using as examples DSR-inspired laws of the type (7), (8), (9), (10). I shall do this without necessarily advocating a DSR interpretation: knowing whether or not the outcome of tests of modifications of the dispersion relation depends on the possibility of also having a modification of the momentum-conservation laws is of intrinsic interest, with or without the DSR intuition. But I must stress that when the relativistic symmetries are broken (rather than deformed in the DSR sense) there is no a priori reason to modify the law of energy-momentum conservation, even when the dispersion relation is modified. Indeed most authors adopting modified dispersion relations within a broken-symmetry scenario keep the law of energy-momentum conservation undeformed.

On the other hand, the DSR research program has still not reached sufficient maturity to provide a fully satisfactory interpretation of the nonlinearities in the conservation laws. For some time the main challenge came (in addition to the mentioned interpretational challenges connected with spacetime locality) from arguments suggesting that one might well replace a given nonlinear setup for a DSR model with one obtained by nonlinearly redefining the coordinatization of momentum space (see, e.g., Ref. [26]). When contemplating such changes of coordinatization of momentum space many interpretational challenges appeared to arise. In my opinion, the recent DSR literature has made significant progress also in this direction, by casting the nonlinearities of momentum-space properties in terms of geometric entities, such as the metric and the affine connection on momentum space (see, e.g., Ref. [67]). This novel geometric interpretation offers several opportunities for addressing the interpretational challenges, but the process is still far from complete.

3.2 Preliminaries on test theories with modified dispersion relation

So far the main focus of Poincaré-symmetry tests planned from a quantum-spacetime-phenomenology perspective has been on the form of the energy-momentum dispersion relation. Indeed, certain analyses of formalisms provide encouragement for the possibility that the Minkowski limit of quantum gravity might indeed be characterized by modified dispersion relations. However, the complexity of the formalisms that motivate the study of Planck-scale modifications of the dispersion relation is such that one has only partial information on the form of the correction terms, and in fact one cannot even robustly establish the presence of modifications of the dispersion relation. Still, in some cases, most notably within some LQG studies and some studies of noncommutative spacetimes, the “theoretical evidence” in favor of modifications of the dispersion relations appears to be rather robust.

This is exactly the type of situation that I mentioned earlier in this review as part of a preliminary characterization of the peculiar type of test theories that must at present be used in quantum-spacetime phenomenology. It is not possible to compare to data the predictions for departures from Poincaré symmetry of LQG and/or noncommutative geometry because these theories do not yet provide a sufficiently rich description of the structures needed for actually doing phenomenology with modified dispersion relations. What we can compare to data are some simple models inspired by the little we believe we understand of the relevant issues within the theories that provide motivation for this phenomenology.

And the development of such models requires a delicate balancing act. If we only provide them with the structures we do understand of the original theories they will be as sterile as the original theories. So, we must add some structure, make some assumptions, but do so with prudence, limiting as much as possible the risk of assuming properties that could turn out not to be verified once we understand the relevant formalisms better.

As this description should suggest, there has been a proliferation of models adopted by different authors, each reflecting a different intuition on what could or could not be assumed. Correspondingly, in order to make a serious overall assessment of the experimental limits so far established with quantum-spacetime phenomenology of modified dispersion relations, one should consider a huge zoo of parameters. Even the parameters of the same parametrization of modifications of the dispersion relation, when analyzed using different assumptions about other aspects of the model, should really be treated as different/independent sets of parameters.

I shall be satisfied with considering some illustrative examples of models, chosen in such a way as to represent possibilities that are qualitatively very different, and representative of the breadth of possibilities that are under consideration. These examples of models will then be used in some relevant parts of this review as “language” for the description of the sensitivity to Planck-scale effects that is within the reach of certain experimental analyses.

3.2.1 With or without standard quantum field theory?

Before describing actual test theories, I should at least discuss the most significant among the issues that must be considered in setting up any such test theory with modified dispersion relation. This concerns the choice of whether or not to assume that the test theory should be a standard low-energy effective quantum field theory.

A significant portion of the quantum-gravity and quantum-spacetime community is rather skeptical of the results obtained using low-energy effective field theory in analyses relevant to the Planck-scale regime. One of the key reasons for this skepticism is the description given by effective field theory of the cosmological constant. The cosmological constant is the most significant experimental fact of evident gravitational relevance that could be within the reach of effective field theory. And current approaches to deriving the cosmological constant within effective field theory produce results that are some 120 orders of magnitude greater than allowed by observations.Footnote 15

However, just as there are several researchers who are skeptical of any results obtained using low-energy effective field theory in analyses relevant to the quantum-gravity/quantum-spacetime regime, there are also quite a few researchers who feel that it is legitimate to assume a description in terms of effective field theory for all low-energy (sub-Planckian) manifestations of the quantum-gravity/quantum-spacetime regime.

Adopting a strict phenomenologist’s viewpoint, perhaps the most important observation is that for several of the effects discussed in this section on UV corrections to Lorentz symmetry, and for some of the effects discussed in later sections, studies based on effective quantum field theory can only be performed with a rather strongly “pragmatic” attitude. One would like to confine the new effects to unexplored high-energy regimes, by adjusting bare parameters accordingly, but, as I shall stress again later, quantum corrections produce [455, 182, 515, 190] effects that are nonetheless significant at accessible low energies, unless one allows for rather severe fine-tuning. On the other hand, we do not have enough clues concerning alternative setups, beyond quantum field theory, that could be used. For example, as I discuss in detail later, some attempts are centered on density-matrix formalisms that go beyond quantum mechanics, but those are (however legitimate) mere speculations at the present time. Nonetheless, several of the phenomenologists involved, myself included, feel that in such a situation phenomenology cannot be stopped by the theory impasse, even at the risk of later discovering that the whole (or a sizable part) of the phenomenological effort was not on sound conceptual bases.

But I stress that even when contemplating the possibility of physics outside the domain of effective quantum field theory, one inevitably must at least come to terms with the success of effective field theory in reproducing a vast class of experimental data. In this respect, at least for studies of Planck-scale departures from classical-spacetime relativistic symmetries, I find particularly intriguing a potential “order-of-limits issue”. The effective-field-theory description might be applicable only in reference frames in which the process of interest is essentially occurring in its center of mass (no “Planck-large boost” [60] with respect to the center-of-mass frame). The field-theoretic description could emerge in a sort of “low-boost limit”, rather than the expected low-energy limit. The regime of low boosts with respect to the center-of-mass frame is often indistinguishable from the low-energy limit. For example, from a Planck-scale perspective, our laboratory experiments (even the ones conducted at, e.g., CERN, DESY, SLAC, …) are both low boost (with respect to the center-of-mass frame) and low energy. However, some contexts that are of interest in quantum-gravity phenomenology, such as the collisions between ultra-high-energy cosmic-ray protons and CMBR photons, are situations where all the energies of the particles are still tiny with respect to the Planck energy scale, but the boost with respect to the center-of-mass frame could be considered to be “large” from a Planck-scale perspective: the Lorentz factor γ with respect to the proton rest frame is much greater than the ratio between the Planck scale and the proton energy

$$\gamma = E/{m_{{\rm{proton}}}} \gg {E_p}/E.$$
(11)
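
To make the comparison in (11) concrete, here are rough numbers (my own illustrative arithmetic) for a cosmic-ray proton near the GZK scale:

```python
# Rough numbers (my own illustration) for the "Planck-large boost" criterion (11),
# for a cosmic-ray proton near the GZK scale colliding with a CMBR photon.
m_proton = 9.38e8        # proton mass [eV]
E_p = 1.22e28            # Planck energy [eV]
E = 5e19                 # proton energy near the GZK scale [eV]

gamma = E / m_proton     # Lorentz factor with respect to the proton rest frame
print(f"gamma   = {gamma:.1e}")    # ~ 5e10
print(f"E_p / E = {E_p / E:.1e}")  # ~ 2e8, so indeed gamma >> E_p / E
```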

Another interesting scenario concerning the nature of the limit through which quantum-spacetime physics should reproduce ordinary physics is suggested by results on field theories in noncommutative spacetimes. One can observe that a spacetime characterized by an uncertainty relation of the type

$$\delta x\delta y \geq \theta (x,y)$$
(12)

never really behaves as a classical spacetime, not even at very low energies. In fact, according to this type of uncertainty relation, a low-energy process involving soft momentum exchange in the x direction (large δx) should somehow be connected to the exchange of a hard momentum in the y direction (δy ∼ θ/δx), and this feature cannot be faithfully captured by our ordinary field-theory formalisms. For the “canonical noncommutative spacetimes” one does obtain a plausible-looking field theory [213], but the results actually show that it is not possible to rely on an ordinary effective low-energy quantum-field-theory description because of the presence of “UV/IR mixing” [213, 397] (a mechanism such that the high-energy sector of the theory does not decouple from the low-energy sector, which in turn very severely affects the prospects of such analyses). For other (non-canonical) noncommutative spacetimes we are still struggling in the search for a satisfactory formulation of a quantum field theory [335, 64], and it is at this point legitimate to worry that such a formulation of dynamics in those spacetimes might not exist.

And the assumption of availability of an ordinary effective low-energy quantum-field-theory description has also been challenged by some perspectives on the LQG approach. For example, the arguments presented in Ref. [245] suggest that in several contexts in which one would naively expect a low-energy field theory description LQG might instead require a density-matrix description with features going beyond the reach of effective quantum field theory.

3.2.2 Other key features of test theories with modified dispersion relation

In order to be applicable to a significant ensemble of experimental contexts, a test theory should specify much more than the form of the dispersion relation. In light of the type of data that we expect to have access to (see later, e.g., Sections 3.4, 3.5, and 3.8), besides the choice of working within or without low-energy effective quantum field theory, there are at least three other issues that the formulation of such a test theory should clearly address:

  (i) Is the modification of the dispersion relation “universal”? Or should one instead allow different modification parameters for different particles?

  (ii) In the presence of a modified dispersion relation between the energy E and the momentum p of a particle, should we still assume the validity of the relation υ = dE/dp between the speed of a particle and its dispersion relation?

  (iii) In the presence of a modified dispersion relation, should we still assume the validity of the standard law of energy-momentum conservation?

Unfortunately on these three key points, the quantum-spacetime pictures that are providing motivation for the study of Planck-scale modifications of the dispersion relation are not giving us much guidance yet.

For example, in LQG, while we do have some (however fragile and indirect) evidence that the dispersion relation should be modified, we do not yet have a clear indication concerning whether the law of energy-momentum conservation should also be modified and we also cannot yet establish whether the relation υ = dE/dp should be preserved.

Similarly, in the analysis of noncommutative spacetimes we are close to establishing rather robustly the presence of modifications of the dispersion relation, but other aspects of the relevant theories have not yet been clarified. While most of the literature for canonical noncommutative spacetimes assumes [213, 397] that the law of energy-momentum conservation should not be modified, most of the literature on κ-Minkowski spacetime argues in favor of a modification of the law of energy-momentum conservation. There is also still no consensus on the relation between speed and dispersion, and particularly in the κ-Minkowski literature some departures from the υ = dE/dp relation are actively considered [336, 414, 199, 351]. And at least for canonical noncommutative spacetimes the possibility of a nonuniversal dispersion relation is considered extensively [213, 397].

Concerning the relation υ = dE/dp it may be useful to stress that it can be obtained by assuming that a Hamiltonian description is still available, υ = dx/dt ∼ [x, H(p)], and that the Heisenberg uncertainty principle still holds exactly ([x, p] = 1 → x ∼ ∂/∂p). The possibility of modifications of the Hamiltonian description is an aspect of the debate on “Planck-scale dynamics” that was in part discussed in Section 3.2.1. And concerning the Heisenberg uncertainty principle I have already mentioned some arguments that invite us to contemplate modifications.
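
For definiteness, here is a small symbolic check (my own sketch, using the standard convention [x, p] = i rather than the shorthand [x, p] = 1 adopted above) of this step: representing x as i∂/∂p in the momentum representation, the commutator of x with any Hamiltonian H(p) reduces to the derivative of H, which is the group-velocity statement υ = dE/dp.

```python
import sympy as sp

p = sp.symbols('p', real=True)
psi = sp.Function('psi')(p)   # generic wavefunction in the momentum representation
H = sp.Function('H')(p)       # generic Hamiltonian depending only on momentum

x = lambda f: sp.I * sp.diff(f, p)     # x ~ i d/dp, so that [x, p] = i
commutator = x(H * psi) - H * x(psi)   # [x, H] acting on psi

# v = dx/dt ~ [x, H]  ->  dH/dp, i.e., the group velocity dE/dp
print(sp.simplify(commutator / (sp.I * psi)))   # -> Derivative(H(p), p)
```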

3.2.3 A test theory for pure kinematics

With so many possible alternative ingredients to mix, one can of course produce a large variety of test theories. As mentioned, I intend to focus on some illustrative examples of test theories for my characterization of achievable experimental sensitivities.

My first example is a test theory of very limited scope, since it is conceived to only describe pure-kinematics effects. This will strongly restrict the class of experiments that can be analyzed in terms of this test theory, but the advantage is that the limits obtained on the parameters of this test theory will have rather wide applicability (they will apply to any quantum-spacetime theory with that form of kinematics, independent of the description of dynamics).

The first element of this test theory, introduced from a quantum-spacetime-phenomenology perspective in Refs. [66, 65], is a “universal” (same for all particles) dispersion relation of the form

$${m^2} \simeq {E^2} - {\vec p^2} + \eta {\vec p^2}\left({{{{E^n}} \over {E_p^n}}} \right),$$
(13)

with real η of order 1 and integer n (> 0). This formula is compatible with some of the results obtained in the LQG approach and reflects some results obtained for theories in κ-Minkowski noncommutative spacetime.

Already in the first studies [66] that proposed a phenomenology based on (13) it was assumed that even at the Planck scale the familiar description of “group velocity”, obtained from the dispersion relation according to υ = dE/dp, would hold.
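
For a massless particle, combining the dispersion relation (13) with υ = dE/dp gives, at leading order, υ ≃ 1 − η(n + 1)(E/E_p)ⁿ/2. The following sympy sketch (my own, for the illustrative case n = 1) makes the derivation explicit:

```python
import sympy as sp

E = sp.symbols('E')
p, eta, lp = sp.symbols('p eta l_p', positive=True)   # l_p stands for 1/E_p

# Massless case of the dispersion relation (13) with n = 1:
#   0 = E^2 - p^2 + eta * p^2 * E / E_p
roots = sp.solve(E**2 - p**2 + eta * p**2 * E * lp, E)
E_of_p = [r for r in roots if sp.simplify(r.subs(lp, 0) - p) == 0][0]  # branch with E -> p

v = sp.diff(E_of_p, p)         # group velocity v = dE/dp
print(sp.series(v, lp, 0, 2))  # -> 1 - eta*p*l_p + O(l_p**2), i.e., v ~ 1 - eta*E/E_p
```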

And in other early phenomenology works [327, 38, 73, 463] based on (13) it was assumed that the law of energy-momentum conservation should not be modified at the Planck scale, so that, for example, in a a + b → c + d particle-physics process one would have

$${E_a} + {E_b} = {E_c} + {E_d},$$
(14)
$${\vec p_a} + {\vec p_b} = {\vec p_c} + {\vec p_d}.$$
(15)

In the following, I will refer to this test theory as the “PKV0 test theory”, where “PK” reflects its “Pure-Kinematics” nature, “V” reflects its “Lorentz-symmetry Violation” content, and “0” reflects the fact that it combines the dispersion relation (13) with what appears to be the most elementary set of assumptions concerning other key aspects of the physics: universality of the dispersion relation, υ = dE/dp, and the unmodified law of energy-momentum conservation.

This rudimentary framework is a good starting point for exploring the relevant phenomenology. But one should also consider some of the possible variants. For example, the undeformed conservation of energy-momentum is relativistically incompatible with the deformation of the dispersion relation (so, in particular, the PKV0 test theory requires a preferred frame). Modifications of the law of energy-momentum conservation would be required in a DSR picture, and may be considered even in other scenarios.Footnote 16

Evidently, the universality of the effect can and should be challenged. And there are indeed (as I shall stress again later in this review) several proposals of test theories with different magnitudes of the effects for different particles [395, 308]. Let me just mention, in closing this section, a case that is particularly challenging for phenomenology: the case of the variant of the PKV0 test theory allowing for nonuniversality such that the effects are restricted only to photons [227, 74], thereby limiting significantly the class of observations/experiments that could test the scenario (see, however, Ref. [380]).

3.2.4 A test theory based on low-energy effective field theory

The restriction to pure kinematics has the merit of allowing us to establish constraints that are applicable to a relatively large class of quantum-spacetime scenarios (different formulations of dynamics would still be subject to the relevant constraints), but it also severely restricts the type of experimental contexts that can be considered, since it is only in rare instances (and only to some extent) that one can qualify an analysis as purely kinematical. The desire to be able to analyze a wider class of experimental contexts is, therefore, providing motivation for the development of test theories more ambitious than the PKV0 test theory, with at least some elements of dynamics. This is rather reasonable, as long as one proceeds with awareness of the fact that, in light of the situation on the theory side, for test theories adopting a given description of dynamics there is a risk that we may eventually find out that none of the quantum-gravity approaches being pursued is reflected in the test theory.

When planning to devise a test theory that includes the possibility to describe dynamics, the first natural candidate (notwithstanding the concerns reviewed in Section 3.2.1) is the framework of low-energy effective quantum field theory. In this section I want to discuss a test theory that is indeed based on low-energy effective field theory, and has emerged primarilyFootnote 17 from the analysis reported by Myers and Pospelov in Ref. [426]. Motivated mainly by the perspective of LQG advocated in Ref. [247], this test theory explores the possibility of a linear-in-L_p modification of the dispersion relation

$${m^2} \simeq {E^2} - {\vec p^2} + \eta {\vec p^2}{L_p}E,$$
(16)

i.e., the case n = 1 of Eq. (13). Perhaps the most notable outcome of the exercise of introducing such a dispersion relation within an effective low-energy field-theory setup is the observation [426] that for the case of electromagnetic radiation, assuming essentially only that the effects are characterized mainly by an external four-vector, one arrives at a single possible correction term for the Lagrangian density:

$$\mathcal{L} = - {1 \over 4}{F_{\mu \nu}}{F^{\mu \nu}} + {1 \over {2{E_p}}}{n^\alpha}{F_{\alpha \delta}}{n^\sigma}{\partial _\sigma}({n_\beta}{\varepsilon ^{\beta \delta \gamma \lambda}}{F_{\gamma \lambda}}),$$
(17)

where the four-vector n_α parameterizes the effect.

This is also a framework for broken Lorentz symmetry, since the (dimensionless) components of n_α take different values in different reference frames, transforming as the components of a four-vector. And a full-scope phenomenology for this proposal should explore [271] the four-dimensional parameter space n_α, taking into account the characteristic frame dependence of the parameters n_α. As I discuss in later parts of this section, there is already a rather sizable literature on this phenomenology, but still mainly focused on what turns out to be the simplest possibility for the Myers-Pospelov framework, which relies on the assumption that one is in a reference frame where n_α only has a time component, n_α = (n_0, 0, 0, 0). Then, upon introducing the convenient notation ξ ≡ (n_0)³, one can rewrite (17) as

$${\mathcal L} = - {1 \over 4}{F_{\mu \nu}}{F^{\mu \nu}} + {\xi \over {2{E_p}}}{\varepsilon ^{jkl}}{F_{0j}}{\partial _0}{F_{kl}},$$
(18)

and in particular one can exploit the simplifications provided by spatial isotropy. And a key feature that arises is birefringence: within this setup it turns out that when right-circular polarized photons satisfy the dispersion relation \({E^2} \simeq {p^2} + {\eta _\gamma}{p^3}/{E_p}\), then necessarily left-circular polarized photons satisfy the “opposite sign” dispersion relation \({E^2} \simeq {p^2} - {\eta _\gamma}{p^3}/{E_p}\).

In the same spirit one can add spin-1/2 particles to the model, but for them the structure of the framework does not introduce constraints on the parameters, and in particular there can be two independent parameters η_+ and η_− to characterize the modification of the dispersion relation for fermions of different helicity:

$${m^2} \simeq {E^2} - {\vec p^2} + {\eta _ +}{\vec p^2}\left({{E \over {{E_p}}}} \right),$$
(19)

in the positive-helicity case, and

$${m^2} \simeq {E^2} - {\vec p^2} + {\eta _ -}{\vec p^2}\left({{E \over {{E_p}}}} \right),$$
(20)

in the negative-helicity case. The formalism is compatible with the possibility of introducing further independent parameters for each additional fermion in the theory (so that, e.g., protons would have different values of η_+ and η_− with respect to electrons). And there is no constraint on the relation between η_+ and η_−, but the consistency of the framework requires [308] that for particle-antiparticle pairs the deformation should have opposite signs on opposite helicities, so that, for example, \(\eta _ + ^{({\rm{electron}})} = - \eta _ - ^{({\rm{positron}})}\;{\rm{and}}\;\eta _ - ^{({\rm{electron}})} = - \eta _ + ^{({\rm{positron}})}\).

In some investigations one might prefer to look at particularly meaningful portions of this large parameter space. For example, one might consider [62] the possibility that the deformation for all spin-1/2 particles be characterized by only two parameters, the same two parameters for all particle-antiparticle pairs (leaving open, however, some possible sign ambiguities to accommodate the possibility to choose between, for example, \(\eta _ + ^{({\rm{muon}})} = \eta _ + ^{({\rm{electron}})} = - \eta _ - ^{({\rm{positron}})}\;{\rm{and}}\;\eta _ + ^{({\rm{muon}})} = \eta _ + ^{({\rm{positron}})} = - \eta _ - ^{({\rm{electron}})}\)). In the following I will refer to this test theory as the “FTV0 test theory”, where “FT” reflects its adoption of a “low-energy effective Field Theory” description, “V” reflects its “Lorentz-symmetry Violation” content, and “0” reflects the “minimalistic” assumption of “universality for spin-1/2 particles”.

3.2.5 More on “pure-kinematics” and “field-theory-based” phenomenology

Before starting my characterization of experimental sensitivities in terms of the parameters of some test theories I find it appropriate to add a few remarks warning about some difficulties that are inevitably encountered.

For the pure-kinematics test theories, some key difficulties originate from the fact that sometimes an effect due to the modification of dynamics can take a form that is not easily distinguished from a pure-kinematics effect. And other times one deals with an analysis of effects that appear to be exclusively sensitive to kinematics but then at the stage of converting experimental results into bounds on parameters some level of dependence on dynamics arises. An example of this latter possibility will be provided by my description of particle-decay thresholds in test theories that violate Lorentz symmetry. The derivation of the equations that characterize the threshold requires only the knowledge of the laws of kinematics. And if, according to the kinematics of a given test theory, a certain particle at a certain energy cannot decay, then observation of the decay allows one to set robust pure-kinematics limits on the parameters. But if the test theory predicts that a certain particle at a certain energy can decay then by not finding such decays we are not in a position to truly establish pure-kinematics limits on the parameters of the test theory. If the decay is kinematically allowed but not seen, it is possible that the laws of dynamics prevent it from occurring (small decay amplitude).

By adopting a low-energy quantum field theory this type of limitation is removed, but other issues must be taken into account, particularly in association with the fact that the FTV0 quantum field theory is not renormalizable. Quantum-field-theory-based descriptions of Planck-scale departures from Lorentz symmetry can only be developed with a rather strongly “pragmatic” attitude. In particular, for the FTV0 test theory, with its Planck-scale suppressed effects at tree level, some authors (notably Refs. [455, 182, 515, 190]) have argued that the loop expansion could effectively generate additional terms of modification of the dispersion relation that are unsuppressed by the cut-off scale of the (nonrenormalizable) field theory. The parameters of the field theory can be fine-tuned to eliminate unwanted large effects, but the needed level of fine-tuning is usually rather unpleasant. While certainly undesirable, this severe fine-tuning problem should not discourage us from considering the FTV0 test theory, at least not at this early stage of the development of the relevant phenomenology. Actually, some of the most successful theories used in fundamental physics are affected by severe fine-tuning. It is not uncommon to eventually discover that the fine-tuning is only apparent, and that some hidden symmetry is actually “naturally” setting up the hierarchy of parameters.

In particular, it is already established that supersymmetry can tame the fine-tuning issue [268, 130]. If one extends supersymmetric quantum electrodynamics by adding interactions with external vector and tensor backgrounds that violate Lorentz symmetry at the Planck scale, then exact supersymmetry requires that such interactions correspond to operators of dimension five or higher, so that no fine-tuning is needed in order to suppress the unwanted operators of dimension lower than five. Supersymmetry can only be an approximate symmetry of the physical world, and the scale of the soft-supersymmetry-breaking masses controls the renormalization-group evolution of dimension-five Lorentz-violating operators and their mixing with dimension-three Lorentz-violating operators [268, 130].

It has also been established [461] that if Lorentz violation occurs in the gravitational sector, then the violations of Lorentz symmetry induced on the matter sector do not require severe fine-tuning. In particular, this has been investigated by coupling the Standard Model of particle physics to a Hořava-Lifshitz description of gravitational phenomena.

The study of Planck-scale departures from Lorentz symmetry may find some encouragement in perspectives based on renormalization theory, at least in as much as it has been shown [79, 78, 289, 507] that some field theories modified by Lorentz-violating terms are actually rather well behaved in the UV.

3.3 Photon stability

3.3.1 Photon stability and modified dispersion relations

The first example of Planck-scale sensitivity that I discuss is the case of a process that is kinematically forbidden in the presence of exact Lorentz symmetry, but becomes kinematically allowed in the presence of certain departures from Lorentz symmetry. It has been established (see, e.g., Refs. [305, 59, 334, 115]) that when Lorentz symmetry is broken at the Planck scale, there can be significant implications for certain decay processes. At the qualitative level, the most significant novelty would be the possibility for massless particles to decay. And certain observations in astrophysics, which allow us to establish that photons of energies up to ∼ 10¹⁴ eV are stable, can then be used [305, 59, 334, 115] to set limits on schemes for departures from Lorentz symmetry.

For my purposes it suffices to consider the process γ → e⁺e⁻. Let us start from the perspective of the PKV0 test theory, and therefore adopt the dispersion relation (13) and unmodified energy-momentum conservation. One easily finds a relation between the energy E_γ of the incoming photon, the opening angle θ between the outgoing electron-positron pair, and the energy E_+ of the outgoing positron (the energy of the outgoing electron is simply given by E_γ − E_+). Setting n = 1 in (13) one finds that, for the region of phase space with m_e ≪ E_γ ≪ E_p, this relation takes the form

$$\cos (\theta) \simeq {{{E_ +}({E_\gamma} - {E_ +}) + m_e^2 - \eta {E_\gamma}{E_ +}({E_\gamma} - {E_ +})/{E_p}} \over {{E_ +}({E_\gamma} - {E_ +})}},$$
(21)

where m_e is the electron mass.

The fact that for η = 0 Eq. (21) would require cos(θ) > 1 reflects the fact that, if Lorentz symmetry is preserved, the process γ → e⁺e⁻ is kinematically forbidden. For η < 0 the process remains forbidden, but for positive η high-energy photons can decay into an electron-positron pair. Indeed, the right-hand side of (21) is smallest at the symmetric point E_+ = E_γ/2, so the decay channel opens up as soon as \(\eta E_\gamma ^3/(4{E_p}) \gtrsim m_e^2\): for \({E_\gamma} \gg {(m_e^2{E_p}/\vert\eta \vert)^{1/3}}\) one finds that there is a region of phase space where cos(θ) < 1, i.e., there is a physical phase space available for the decay.
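
A rough numerical illustration (my own sketch, not part of the original analyses, assuming E_p ≃ 1.22 × 10²⁸ eV) scans Eq. (21) at the symmetric point of phase space and locates the energy at which the decay channel opens for η = 1:

```python
import numpy as np

m_e = 0.511e6   # electron mass [eV]
E_p = 1.22e28   # Planck energy [eV]
eta = 1.0

def cos_theta(E_gamma, E_plus):
    """Right-hand side of Eq. (21) for n = 1."""
    x = E_plus * (E_gamma - E_plus)
    return (x + m_e**2 - eta * E_gamma * x / E_p) / x

# The decay gamma -> e+ e- is allowed if cos(theta) <= 1 somewhere in phase
# space; the most favorable point is the symmetric one, E_plus = E_gamma / 2.
for E_gamma in (1e13, 2e13, 3e13, 1e14):
    allowed = cos_theta(E_gamma, E_gamma / 2) <= 1.0
    print(f"E_gamma = {E_gamma:.0e} eV   decay allowed: {allowed}")

# The channel opens at (4 m_e^2 E_p / eta)**(1/3) ~ 2.3e13 eV
print(f"opening scale: {(4 * m_e**2 * E_p / eta) ** (1 / 3):.1e} eV")
```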

The energy scale \({(m_e^2{E_p})^{1/3}}\) ∼ 10¹³ eV is not too high for testing, since, as mentioned, in astrophysics we see photons of energies up to ∼ 10¹⁴ eV that are stable (they clearly travel safely over some large astrophysical distances). The level of sensitivity that is within reach of these studies therefore goes at least down to values of (positive) η of order 1 and somewhat smaller than 1. This is what one describes as “Planck-scale sensitivity” in the quantum-spacetime-phenomenology literature: having set the dimensionful deformation parameter to the Planck-scale value, the coefficient of the term that can be tested is of order 1 or smaller. However, specifically for the case of the photon-stability analysis it is rather challenging to transform this Planck-scale sensitivity into actual experimental limits.

Within PKV0 kinematics, for n = 1 and positive η of order 1, it would have been natural to expect that photons with ∼ 10¹⁴ eV energy are unstable. But the fact that the decay of 10¹⁴ eV photons is allowed by PKV0 kinematics does not guarantee that these photons should rapidly decay. It depends on the relevant probability amplitude, whose evaluation goes beyond the reach of kinematics. Still, it is likely that these observations are very significant for theories that are compatible with PKV0 kinematics. For a theory that is compatible with PKV0 kinematics (with positive η) this evidence of stability of photons imposes the identification of a dynamical mechanism that essentially prevents photon decay. If one finds no such mechanism, the theory is “ruled out” (or at least its parameters are severely constrained), but in principle one could look endlessly for such a mechanism. A balanced approach to this issue must take into account that quantum-spacetime physics may well modify both kinematics and the strength (and nature) of interactions at a certain scale, and it might in principle do this in ways that cannot be accommodated within the confines of effective quantum field theory; but one should take notice of the fact that, even in some new (to-be-discovered) framework outside effective quantum field theory, it is unlikely that there will be very large “conspiracies” between the modifications of kinematics and the modifications of the strength of interaction. In principle, models based on pure kinematics are immune from certain bounds on parameters that are derived also using descriptions of the interactions, and it is conceivable that in the correct theory the actual bound would be somewhat shifted from the value derived within effective quantum field theory. But in order to contemplate large differences in the bounds one would need to advocate very large and ad hoc modifications of the strength of interactions, large enough to compensate for the often dramatic implications of the modifications of kinematics. The challenge then is to find satisfactory criteria for confining speculations about variations of the strengths of interaction within a certain plausible range. To my knowledge this has not yet been attempted, but it deserves high priority.

A completely analogous calculation can be done within the FTV0 test theory, and there one can easily arrive at the conclusion [377] that the FTV0 description of dynamics should not significantly suppress the photon-decay process. However, as mentioned, consistency with the effective-field-theory setup requires that the two polarizations of the photon acquire opposite-sign modifications of the dispersion relation. We observe in astrophysics some photons of energies up to ∼ 10¹⁴ eV that are stable over large distances, but as far as we know those photons could all be right-circular polarized (or all left-circular polarized). This evidence of stability of photons, therefore, is only applicable to the portion of the FTV0 parameter space in which both polarizations should be unstable (a subset of the region with |η_+| > |η_γ| and |η_−| > |η_γ|).

3.3.2 Photon stability and modified energy-momentum conservation

So far I have discussed photon stability assuming that only the dispersion relation is modified. If the modification of the dispersion relation is instead combined with a modification of the law of energy-momentum conservation, the results can change very significantly. In order to expose these changes in rather striking fashion, let me consider the example of DSR-inspired laws of energy-momentum conservation for the case of γ → e⁺e⁻:

$${E_\gamma} \simeq {E_ +} + {E_ -} - {\eta \over {{E_p}}}{\vec p_ +} \cdot {\vec p_ -},$$
(22)
$${\vec p_\gamma} \simeq {\vec p_ +} + {\vec p_ -} - {\eta \over {{E_p}}}{E_ +}{\vec p_ -} - {\eta \over {{E_p}}}{E_ -}{\vec p_ +}.$$
(23)

Using these in place of ordinary conservation of energy-momentum, one ends up with a result for cos(θ) that is still of the form (A + B)/A, but now with \(A = 2{E_ +}({E_\gamma} - {E_ +}) + \eta {E_\gamma}{E_ +}({E_\gamma} - {E_ +})/{E_p}\) and \(B = 2m_e^2\):

$$\cos (\theta) \simeq {{2{E_ +}({E_\gamma} - {E_ +}) + \eta {E_\gamma}{E_ +}({E_\gamma} - {E_ +})/{E_p} + 2m_e^2} \over {2{E_ +}({E_\gamma} - {E_ +}) + \eta {E_\gamma}{E_ +}({E_\gamma} - {E_ +})/{E_p}}}.$$
(24)

Evidently, this formula always gives cos(θ) > 1, so there are combinations of modifications of the dispersion relation and modifications of energy-momentum conservation such that γ → e⁺e⁻ is still forbidden.

If the modification of the dispersion relation and the modification of the law of energy-momentum conservation are not matched exactly to get this result, then one can have the possibility of photon decay, but in some cases it can be further suppressed (in addition to the Planck-scale suppression) by the partial compensation between the two modifications.

The fact that the matching between the modification of the dispersion relation and the modification of the law of energy-momentum conservation that produces a stable photon is obtained using a DSR-inspired setup is not surprising [63]. The relativistic properties of the framework are clearly at stake in this derivation. A threshold-energy requirement for particle decay (such as the \({E_\gamma} \gg {(m_e^2{E_p}/\vert\eta \vert)^{1/3}}\) mentioned above) cannot be introduced as an observer-independent law, and is therefore incompatible with any relativistic (even DSR-relativistic) formulation of the laws of physics. In fact, different observers assign different values to the energy of a particle and, therefore, in the presence of a threshold-energy requirement for particle decay, a given particle would be allowed to decay according to some observers while being totally stable for others.

3.4 Pair-production threshold anomalies and gamma-ray observations

Another opportunity to investigate quantum-spacetime-inspired Planck-scale departures from Lorentz symmetry is provided by certain types of energy thresholds for particle-production processes that are relevant in astrophysics. This is a very powerful tool for quantum-spacetime phenomenology [327, 38, 73, 463, 512, 364, 307, 494], and, in fact, at the beginning of this review, I chose the evaluation of the threshold energy for photopion production, \(p + {\gamma _{{\rm{CMBR}}}} \rightarrow p + \pi \), as the basis for illustrating how the sensitivity levels that are within our reach can be placed in rather natural connection with effects introduced at the Planck scale.

I discuss the photopion-production threshold analysis in more detail in Section 3.5. Here, I consider instead the electron-positron pair-production process, γγ → e⁺e⁻.

3.4.1 Modified dispersion relations and γγ → e⁺e⁻

The threshold for γγ → e⁺e⁻ is relevant for studies of the opacity of our Universe to photons. In particular, according to the conventional (classical-spacetime) description, the IR diffuse extragalactic background should give rise to strong absorption of “TeV photons” (here understood as photons with energy 1 TeV < E < 30 TeV), but this prediction must be reassessed in the presence of violations of Lorentz symmetry.

To show that this is the case, let me start once again from the perspective of the PKV0 test theory, and analyze a collision between a soft photon of energy ϵ and a high-energy photon of energy E, which might produce an electron-positron pair. Using the dispersion relation (13) (for n = 1) and the (unmodified) law of energy-momentum conservation, one finds that, for given soft-photon energy ϵ, the process γγ → e⁺e⁻ is allowed only if E is greater than a certain threshold energy E_th that depends on ϵ and \(m_e^2\), as implicitly codified in the formula (valid for ϵ ≪ m_e ≪ E_th ≪ E_p)

$${E_{th}}\epsilon + \eta {{E_{th}^3} \over {8{E_p}}} \simeq m_e^2.$$
(25)

The special-relativistic result \({E_{th}} = m_e^2/\epsilon\) corresponds to the η → 0 limit of (25). For |η| ∼ 1 the Planck-scale correction can be safely neglected as long as \(\epsilon \gg {(m_e^4/{E_p})^{1/3}}\). But eventually, for sufficiently small values of ϵ (and correspondingly large values of E_th) the Planck-scale correction cannot be ignored.
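
The following numerical sketch (my own, not from the original analyses) solves Eq. (25) for a few values of ϵ. It illustrates both the smallness of the threshold shift at larger ϵ and the fact that, for η = −1, once ϵ drops below ∼ (m_e⁴/E_p)^{1/3} ≈ 0.02 eV, Eq. (25) admits no solution at all, i.e., the process becomes kinematically forbidden:

```python
import numpy as np

m_e = 0.511e6   # electron mass [eV]
E_p = 1.22e28   # Planck energy [eV]

def pair_threshold(eps, eta):
    """Lowest E_th solving Eq. (25): eta/(8 E_p) * E^3 + eps * E - m_e^2 = 0."""
    roots = np.roots([eta / (8 * E_p), 0.0, eps, -m_e**2])
    real = roots[np.abs(roots.imag) < 1e-6].real
    positive = real[real > 0]
    return positive.min() if positive.size else None   # None: no threshold exists

for eps in (0.2, 0.02, 0.01):   # soft-photon energies [eV]
    print(f"eps = {eps:4.2f} eV   special-relativistic: {m_e**2 / eps:.2e} eV   "
          f"eta = +1: {pair_threshold(eps, +1.0)}   eta = -1: {pair_threshold(eps, -1.0)}")
```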

This provides an opportunity for a pure-kinematics test: if a 10 TeV photon collides with a photon of 0.03 eV and produces an electron-positron pair, then the case n = 1, η ∼ −1 of the PKV0 test theory is ruled out. A 10 TeV photon and a 0.03 eV photon can produce an electron-positron pair according to ordinary special-relativistic kinematics (and its associated requirement \({E_{th}} = m_e^2/\epsilon\)), but they cannot produce an electron-positron pair according to PKV0 kinematics with n = 1 and η ∼ −1.

For positive η the situation is somewhat different. While negative η increases the energy requirement for electron-positron pair production, positive η decreases it. In some cases, where one would expect electron-positron pair production to be forbidden, the PKV0 test theory with positive η would instead allow it. But once a process is allowed there is no guarantee that it will actually occur, not without some information on the description of dynamics (which would allow us to evaluate cross sections). As in the case of photon decay, one must conclude that a pure-kinematics framework can be falsified when it predicts that a process cannot occur (if instead the process is seen), but in principle it cannot be falsified when it predicts that a process is allowed. Here too, one should gradually develop balanced criteria, taking into account the remarks I offer in Section 3.3.1 concerning the plausibility (or lack thereof) of conspiracies between modifications of kinematics and modifications of the strengths of interaction.

Concerning the level of sensitivity that we can expect to achieve in this case one can robustly claim that Planck-scale sensitivity is within our reach. This, as anticipated above, is best seen considering the “TeV photons” emitted by some blazars, for which (as they travel toward our Earth detectors) the photons of the IR diffuse extragalactic background are potential targets for electron-positron pair production. In estimating the sensitivity achievable with this type of analyses it is necessary to take into account the fact that, besides the form of the threshold condition, there are at least three other factors that play a role in establishing the level of absorption of TeV photons emitted by a given blazar: our knowledge of the type of signal emitted by the blazar (at the source), the distance of the blazar, and most importantly the density of the IR diffuse extragalactic background.

The availability of observations of the relevant type has increased very significantly over these past few years. For example, for the blazar “Markarian 501” (at a redshift of z = 0.034) and the blazar “H1426+428” (at a redshift of z = 0.129) robust observations up to the 20-TeV range have been reported [15, 16], and for the blazar “Markarian 421” (at a redshift of z = 0.031) observations of photons of energy up to 45 TeV have been reported [438], although a more robust signal is seen once again up to the 20-TeV range [355, 17].

The key obstruction for translating these observations into an estimate of the effectiveness of pair-production absorption comes from the fact that measurements of the density of the IR diffuse extragalactic background are very difficult, and as a result our experimental information on this density is still affected by large uncertainties [235, 536, 111, 278].

The observations do show convincingly that some absorption is occurring [15, 16, 438, 355, 17]. I should stress the fact that the analysis of the combined X-ray/TeV-gamma-ray spectrum for the Markarian 421 blazar, as discussed in Ref. [333], provides rather compelling evidence. The X-ray part of the spectrum allows one to predict the TeV-gamma-ray part of the spectrum in a way that is rather insensitive to our poor knowledge of the source. This in turn allows us to establish in a source-independent way that some absorption is occurring.

For the associated quantum-spacetime-phenomenology analysis, the fact that some absorption is occurring does not allow us to infer much: the analysis will become more and more effective as the quantitative characterization of the effectiveness of absorption becomes more and more precise (as measured by the amount of deviation from the level of absorption expected within a classical-spacetime analysis that would still be compatible with the observations). And we are not yet ready to make any definite statement about these absorption levels. This is not only a result of our rather poor knowledge of the IR diffuse extragalactic background, but it is also due to the status of the observations, which still presents us with some apparent puzzles. For example, it is not yet fully understood why, as observed by some [15, 355, 17, 536], there is a difference between the absorption-induced cutoff energy found in data concerning Markarian 421, \(E_{{\rm{mk}}421}^{{\rm{cutoff}}} \simeq 3.6\;{\rm{TeV}}\), and the corresponding cutoff estimate obtained from Markarian-501 data, \(E_{{\rm{mk}}501}^{{\rm{cutoff}}} \simeq 6.2\;{\rm{TeV}}\). And the observation of TeV γ-rays emitted by the blazar H1426+428, which is significantly more distant than Markarian 421 and Markarian 501, does show a level of absorption that is higher than the ones inferred for Markarian 421 and Markarian 501, but (at least assuming a certain description [16] of the IR diffuse extragalactic background) the H1426+428 TeV luminosity “seems to exceed the level anticipated from the current models of TeV blazars by far” [16].

Clearly, the situation requires further clarification, but it seems reasonable to expect that within a few years we should fully establish facts such as “γ-rays with energies up to 20 TeV are absorbed by the IR diffuse extragalactic background” (Footnote 18). This would imply that at least some photons with energy smaller than ∼ 200 meV can create an electron-positron pair in collisions with a 20 TeV γ-ray. In turn this would imply for the PKV0 test theory, with n = 1, that necessarily η ≥ −50 (i.e., either η is positive or η is negative with absolute value not larger than 50). This means that this strategy of analysis will soon take us robustly to sensitivities that are less than a factor of 100 away from Planck-scale sensitivity, and it is natural to expect that further refinements of these measurements will eventually take us to Planck-scale sensitivity and beyond.
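
As a rough numerical illustration of how the bound just quoted arises, consider the following minimal Python sketch. It assumes the n = 1 pair-production threshold condition in the form \(\epsilon \simeq m_e^2/E + \vert\eta\vert E^2/(8 E_p)\) for negative η (the pair-production analogue of the structure of Eq. (29); the O(1) numerical factor should be treated as illustrative):

# Order-of-magnitude check of the bound "eta >= -50" quoted above.
# Assumes the n = 1 pair-production threshold condition
#   eps ~ m_e^2/E + |eta| * E^2 / (8 * E_p)   (negative-eta case);
# the factor 8 is convention dependent and should be treated as illustrative.

E_p = 1.22e28   # Planck scale in eV
m_e = 0.511e6   # electron mass in eV
E   = 20e12    # gamma-ray energy in eV (20 TeV)
eps = 0.2      # soft-photon energy in eV (~200 meV)

# Largest |eta| for which a 0.2 eV photon can still pair-produce on a 20 TeV photon:
eta_bound = (eps - m_e**2 / E) * 8 * E_p / E**2
print(f"|eta| <= {eta_bound:.0f}")   # ~ 50, i.e., eta >= -50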

The line of reasoning needed to establish whether this Planck-scale sensitivity could apply to pure-kinematics frameworks is somewhat subtle. One could simplistically state that when we see a process that is forbidden by a certain set of laws of kinematics then those laws are falsified. However, in principle this statement is correct only when we have full knowledge of the process, including a full determination of the momenta of the incoming particles. In the case of the absorption of multi-TeV gamma rays from blazars it is natural to assume that this absorption is due to interactions with IR photons, but we are not in a position to exclude that the absorption is due to higher-energy background photons. Therefore, we should contemplate the possibility that the PKV0 kinematics be implemented within a framework in which the description of dynamics is such as to introduce a large-enough modification of cross sections to allow absorption of multi-TeV blazar gamma rays by background photons of energy higher than 200 meV. As mentioned repeatedly above, I advocate a balanced perspective on these sorts of issues, which should not extend all the way to assuming wild conspiracies centered on very large changes in cross sections, even when testing a pure-kinematics framework. But, as long as a consensus on criteria for such a balanced approach is not established, it is difficult to attribute a quantitative confidence level to experimental bounds on a pure-kinematics framework through the mere observation of some absorption of multi-TeV blazar gamma rays.

These concerns are not applicable to test theories that do provide a description of dynamics, such as the FTV0 test theory, with its effective-field-theory setup. However, for the FTV0 test theory one must take into account the fact that the modification of the dispersion relation carries opposite signs for the two polarizations of the photon and might have a helicity dependence in the case of electrons and positrons. So, in the case of the FTV0 test theory, as long as observations only provide evidence of some absorption of TeV gamma rays (without much to say about the level of agreement with the amount of absorption expected in the classical-spacetime picture), and are, therefore, consistent with the hypothesis that only one of the polarizations of the photon is being absorbed, only rather weak limits can be established.

3.4.2 Threshold anomalies and modified energy-momentum conservation

For the derivation of threshold anomalies, combining a modification of the law of energy-momentum conservation with a modification of the dispersion relation can lead to results that are very different from the case in which only the modification of the dispersion relation is assumed. This is a feature I already stressed in the case of the analysis of photon stability. In order to establish it also for threshold anomalies, let me consider an example of “DSR-inspired” modified law of energy-momentum conservation. I assume that the modification of the law of energy-momentum conservation for the case of \(\gamma \gamma \rightarrow {e^ +}{e^ -}\) takes the form

$$E + \epsilon - {\eta \over {{E_p}}}\vec P \cdot \vec p \simeq {E_ +} + {E_ -} - {\eta \over {{E_p}}}{\vec p_ +} \cdot {\vec p_ -},$$
(26)
$$\vec P + \vec p + {\eta \over {{E_p}}}E\vec p + {\eta \over {{E_p}}}\epsilon \vec P \simeq {\vec p_ +} + {\vec p_ -} + {\eta \over {{E_p}}}{E_ +}{\vec p_ -} + {\eta \over {{E_p}}}{E_ -}{\vec p_ +},$$
(27)

where I denote with \(\vec P\) the momentum of the photon of energy E and with \(\vec p\) the momentum of the photon of energy \(\epsilon\).

Using (26), (27) and the “n = 1” dispersion relation, one obtains (keeping only terms that are meaningful for \({m_e} \ll {E_{th}} \ll {E_p}\))

$${E_{th}} \simeq {{m_e^2} \over \epsilon},$$
(28)

i.e., one ends up with the same result as in the special-relativistic case.

This shows very emphatically that modifications of the law of energy-momentum conservation can compensate for the effects on threshold derivation produced by modified dispersion relations. The cancellation should typically be only partial, but in cases in which the two modifications are “matched exactly” there is no left-over effect. The fact that a DSR-inspired modification of the law of conservation of energy-momentum produces this exact matching admits a tentative interpretation that the interested reader can find in Refs. [58, 63].

3.5 Photopion production threshold anomalies and the cosmic-ray spectrum

In the preceding Section 3.4, I discussed the implications of possible Planck-scale effects for the process \(\gamma \gamma \rightarrow {e^ +}{e^ -}\), but this is not the only process in which Planck-scale effects can be important. In particular, there has been strong interest [327, 38, 73, 463, 305, 59, 115, 35, 431] in the analysis of the “photopion production” process, \(p\gamma \rightarrow p\pi\). As already stressed in Section 1.5, interest in the photopion-production process originates from its role in our description of the high-energy portion of the cosmic-ray spectrum. The “GZK cutoff” feature of that spectrum is linked directly to the value of the minimum (threshold) energy required for cosmic-ray protons to produce pions in collisions with CMBR photons [267, 558] (see, e.g., Refs. [240, 348]). The argument suggesting that Planck-scale modifications of the dispersion relation may significantly affect the estimate of this threshold energy is completely analogous to the one discussed in the preceding Section 3.4 for \(\gamma \gamma \rightarrow {e^ +}{e^ -}\). However, the derivation is somewhat more tedious: in the case of \(\gamma \gamma \rightarrow {e^ +}{e^ -}\) the calculations are simplified by the fact that both outgoing particles have mass m e and both incoming particles are massless, whereas for the threshold conditions for the photopion-production process one needs to handle the kinematics of a head-on collision between a soft photon of energy \(\epsilon\) and a high-energy particle of mass m p and momentum \({\vec k_p}\) producing two (outgoing) particles with masses m p , m π and momenta \(\vec k_p{\prime},{\vec k_\pi}\). The threshold can then be conveniently [73] characterized as a relationship describing the minimum value, denoted by k p,th , that the spatial momentum of the incoming particle of mass m p must have in order for the process to be allowed for a given value ϵ of the photon energy:

$${k_{p,th}} \simeq {{{{({m_p} + {m_\pi})}^2} - m_p^2} \over {4\epsilon}} + \eta {{k_{p,th}^{2 + n}} \over {4\epsilon E_p^n}}\left({{{m_p^{1 + n} + m_\pi ^{1 + n}} \over {{{({m_p} + {m_\pi})}^{1 + n}}}} - 1} \right)$$
(29)

(dropping terms that are further suppressed by the smallness of \(E_p^{- 1}\) and/or the smallness of ϵ or \({m_{p,\pi}}\)).

Notice that whereas in discussing the pair-production threshold relevant for observations of TeV gamma rays I had immediately specialized (13) to the case n = 1, here I am contemplating values of n that are even greater than 1. One could also admit n > 1 for the pair-production threshold analysis, but it would be a mere academic exercise, since it is easy to verify that in that case Planck-scale sensitivity is within reach only for n not significantly greater than 1. Instead (as I briefly stressed already in Section 1.5) the role of the photopion-production threshold in cosmic-ray analysis is such that even for the case of values of n as high as 2 (i.e., even for the case of effects suppressed quadratically by the Planck scale) Planck-scale sensitivity is not unrealistic. In fact, using for m p and m π the values of the masses of the proton and the pion and for ϵ a typical CMBR-photon energy one finds that for negative η of order 1 (effects introduced at the Planck scale) the shift of the threshold codified in (29) is gigantic for n = 1 and still observably large [38, 73] for n = 2.
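
A minimal numerical sketch of these orders of magnitude can be obtained by rearranging Eq. (29) into a condition on the minimum soft-photon energy needed for photopion production by a proton of given momentum, \(\epsilon_{\min} \simeq [{(m_p + m_\pi)^2 - m_p^2}]/(4k) + \eta\, C_n\, k^{1+n}/(4 E_p^n)\), with \(C_n\) the bracket appearing in Eq. (29). The following Python lines (with illustrative input values) show why the anomaly is “gigantic” for n = 1 and still very large for n = 2:

# Size of the photopion-threshold anomaly encoded in Eq. (29), recast as the
# minimum soft-photon energy eps_min for a proton of momentum k (illustrative).

E_p  = 1.22e28    # Planck scale in eV
m_p  = 0.9383e9   # proton mass in eV
m_pi = 0.1396e9   # charged-pion mass in eV
k    = 5e19       # proton momentum in eV (around the GZK scale)
eta  = -1.0

for n in (1, 2):
    C = (m_p**(1 + n) + m_pi**(1 + n)) / (m_p + m_pi)**(1 + n) - 1.0   # < 0
    eps_min_SR = ((m_p + m_pi)**2 - m_p**2) / (4 * k)   # special-relativistic value
    eps_min = eps_min_SR + eta * C * k**(1 + n) / (4 * E_p**n)
    print(f"n = {n}: eps_min from {eps_min_SR:.1e} eV to {eps_min:.1e} eV")
# n = 1: eps_min jumps from ~1e-3 eV (CMBR-like) to ~1e10 eV;
# n = 2: eps_min still jumps to ~70 eV, far above CMBR photon energies.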

For negative η the Planck-scale correction shifts the photopion-production threshold to higher values with respect to the standard classical-spacetime prediction, which estimates the photopion-production threshold scale to be about \(5 \cdot 10^{19}\) eV. Assuming (Footnote 19) that the observed cosmic rays of highest energies are protons, when the spectrum reaches the photopion-production threshold one should first encounter a pileup of cosmic rays with energies just in the neighborhood of the threshold scale, and then above the threshold the spectrum should be severely depleted. The pileup results from the fact that protons with above-threshold energy tend to lose energy through photopion production and slow down until their energy is comparable to the threshold energy. The depletion above the threshold is the counterpart of this pileup (protons emitted at the source with energy above the threshold tend to reach us, if they come to us from far enough away, with energy comparable to the threshold energy).

The availability in this cosmic-ray context of Planck-scale sensitivities for values of n all the way up to n = 2 was fully established by the year 2000 [38, 73]. The debate then quickly focused on establishing what exactly the observations were telling us about the photopion-production threshold. The fact that the AGASA cosmic-ray observatory was reporting [519] evidence of a behavior of the spectrum of the type expected in this Planck-scale picture generated a lot of interest. However, more recent cosmic-ray observations, most notably the ones reported by the Pierre Auger observatory [448, 8], appear to show no evidence of unexpected behavior. There is even some evidence [5] (see, however, the updated Ref. [11]) suggesting that to the highest-energy observed cosmic rays one can associate some relatively nearby sources, and that all this is occurring at scales that could fit within the standard picture of the photopion-production threshold, without Planck-scale effects.

These results reported by the Pierre Auger Observatory are already somewhat beyond the “preliminary” status, and we should soon have at our disposal very robust cosmic-ray data, which should be easily converted into actual experimental bounds on the parameters of Planck-scale test theories.

Among the key ingredients that are still missing, I should assign priority to the mentioned issue of the correlation of cosmic-ray observations with the large-scale distribution of matter in the nearby universe and to the issue of the composition of cosmic rays (protons versus heavy nuclei). The rapidly-evolving [5, 11] picture of correlations with matter in the nearby universe focuses on cosmic-ray events with energy \(\geq 5.7 \cdot 10^{19}\) eV, while the growing evidence of a significant heavy-nuclei component at high energies is limited so far to energies \(\leq 4 \cdot 10^{19}\) eV. And this state of affairs, as notably stressed in Ref. [242], limits our insight on several issues relevant for the understanding of the origin of cosmic rays and the related issues for tests of Lorentz symmetry, since it leaves open several options for the nature and distance of the sources above and below \(5 \cdot 10^{19}\) eV.

Postponing more definite claims on the situation on the experimental side, let me stress, however, that there is indeed a lot at stake in these studies for the hypothesis of quantum-spacetime-induced Planck-scale departures from Lorentz symmetry. Even for pure-kinematics test theories this type of data analysis is rather strongly relevant. For example, the kinematics of the PKV0 test theory forbids (for negative η of order 1 and n ≤ 2) photopion production when the incoming proton energy is in the neighborhood of \(5 \cdot 10^{19}\) eV and the incoming photon has typical CMBR energies. For reasons already stressed (in other contexts), in order to establish a robust experimental limit on pure-kinematics scenarios using the role of the photopion-production threshold in the cosmic-ray spectrum, it would be necessary to also exclude that other background photons (not necessarily CMBR photons) be responsible for the observed cutoff (Footnote 20). It appears likely that such a level of understanding of the cosmic-ray spectrum will be achieved in the not-so-distant future.

For the FTV0 test theory, since it goes beyond pure kinematics, one is not subject to similar concerns [381]. However, the fact that it admits the possibility of different effects for the two helicities of the incoming proton complicates this type of cosmic-ray analysis and renders it less sharp. It does lead to intriguing hypotheses: for example, exploiting the possibility of helicity dependence of the Planck-scale effect for protons, one can rather naturally end up with a scenario that predicts a pileup/cutoff structure somewhat similar to the one of the standard classical-spacetime analysis, but softer, as a result of the fact that only roughly half of the protons would be allowed to lose energy by photopion production.

For the photopion-production threshold one finds exactly the same mechanism, which I discussed in some detail for the pair-production threshold, of possible compensation between the effects produced by modified dispersion relations and the effects produced by modified laws of energy-momentum conservation. So, the analysis of frameworks where both the dispersion relation and the energy-momentum conservation law are modified, as typical in DSR scenarios [63], should take into account that added element of complexity.

3.6 Pion non-decay threshold and cosmic-ray showers

Also relevant to the analysis of cosmic-ray observations is another aspect of the possible implications of quantum-spacetime-motivated Planck-scale departures from Lorentz symmetry: the possibility of a suppression of pion decay at ultrahigh energies. While in some cases departures from Lorentz symmetry allow the decay of otherwise stable particles (as in the case of \(\gamma \rightarrow {e^ +}{e^ -}\), discussed above, for appropriate choices of the values of parameters), it is indeed also possible for departures from Lorentz symmetry to either introduce a threshold value of the energy of the particle, above which a certain decay channel for that particle is totally forbidden [179, 81], or introduce some sort of suppression of the decay probability that increases with energy and becomes particularly effective above a certain threshold value of the energy of the decaying particle [59, 115, 244]. This may be relevant [81, 59] for the description of the air showers produced by cosmic rays, whose structure depends rather sensitively on certain decay probabilities, particularly the one for the decay \(\pi \rightarrow \gamma \gamma\).

The possibility of suppression at ultrahigh energies of the decay \(\pi \rightarrow \gamma \gamma\) has been considered from the quantum-gravity-phenomenology perspective primarily adopting PKV0-type frameworks [59, 115]. Using the kinematics of the PKV0 test theory one easily arrives [59] at the following relationship between the opening angle ϕ between the directions of the momenta of the outgoing photons, the energy of the pion (E π ) and the energies (E and E′ = E π − E) of the outgoing photons:

$$\cos (\phi) = {{2E{E{\prime}} - m_\pi ^2 + 3\eta {E_\pi}E{E{\prime}}/{E_p}} \over {2E{E{\prime}} + \eta {E_\pi}E{E{\prime}}/{E_p}}}.$$
(30)

This relation shows that, for positive η, at high energies the phase space available to the decay is anomalously reduced: for a given value of E π certain values of E that would normally be accessible to the decay are no longer accessible (they would require cos ϕ > 1). This anomaly starts to be noticeable at pion energies of order \({(m_\pi ^2/{L_p})^{1/3}} \sim {10^{15}}\) eV, but only very gradually (at first only a small portion of the available phase space is excluded).
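
To make the closure of phase space concrete, here is a minimal numeric scan of Eq. (30), assuming η = +1 and the neutral-pion mass (the relevant decay being that of the π0); to avoid floating-point loss of precision, the condition cos ϕ > 1 is evaluated in the algebraically equivalent form \(2\eta E_\pi E E{\prime}/E_p > m_\pi^2\):

import numpy as np

# Fraction of the pi -> gamma gamma phase space closed by Eq. (30), for eta = +1.
E_p  = 1.22e28    # Planck scale in eV
m_pi = 1.3498e8   # neutral-pion mass in eV (assumption: the decaying pion is a pi0)
eta  = 1.0

for E_pi in (5e14, 8e14, 1e15, 3e15):            # pion energies in eV
    x = np.linspace(1e-4, 1 - 1e-4, 200001)      # E/E_pi for one of the photons
    E, E2 = x * E_pi, (1.0 - x) * E_pi
    # cos(phi) > 1 in Eq. (30)  <=>  2*eta*E_pi*E*E2/E_p > m_pi^2
    closed = np.mean(2 * eta * E_pi * E * E2 / E_p > m_pi**2)
    print(f"E_pi = {E_pi:.0e} eV: {100 * closed:.0f}% of configurations closed")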

This is rather intriguing since there is a report [81] of experimental evidence of anomalies in the structure of the air showers produced by cosmic rays, particularly in their longitudinal development. And it has been argued in Ref. [81] that these unexpected features of the longitudinal development of air showers could be explained in terms of a severely reduced decay probability for pions of energies of \(10^{15}\) eV and higher. This is still to be considered a very preliminary observation, not only because of the need to acquire data of better quality on the development of air showers, but also because of the role [59] that our limited control of nonperturbative QCD has in setting our expectations for what air-shower development should look like without new physics.

It is becoming rather “urgent” to reassess this issue in light of recent data on cosmic rays and cosmic-ray shower development. Such an exercise has not been carried out for a few years now, and in light of the mentioned Auger data, with the associated debate on the composition of cosmic rays, the analysis of shower development (and, therefore, of the hypothesis of some suppression of pion decay) is acquiring increasing significance [509, 6, 36, 549].

As for the other cases in which I discuss effects of modifications of the dispersion relation on the kinematics of particle reactions, for this pion-decay argument too, scenarios hosting both a modified dispersion relation and modifications of the law of conservation of energy-momentum, as typical in DSR scenarios, can lead to [63] a compensation of the correction terms.

3.7 Vacuum Cerenkov and other anomalous processes

The quantum-spacetime-phenomenology analyses I have reviewed so far have played a particularly significant role in the rapid growth of the field of quantum-spacetime phenomenology over the last decade. This is particularly true for the analyses of the pair-production threshold for gamma rays and of the photopion-production threshold for cosmic rays, in which the data relevant for the Planck-scale effect under study can be perceived as providing some encouragement for new physics. One can legitimately argue [463, 302] that the observed level of absorption of TeV gamma rays is low enough to justify speculations about “new physics” (even though, as mentioned, there are “conventional-physics descriptions” of the relevant data). The opportunities for Planck scale physics to play a role in the neighborhood of the GZK scale of the cosmic-ray spectrum are becoming slimmer, as stressed in Section 3.5, but still it has been an important sign of maturity for quantum-spacetime phenomenology to play its part in the debate that for a while was generated by the preliminary and tentative indications of an anomaly around the “GZK cutoff”. It is interesting how the hypothesis of a pion-stability threshold, another Planck-scale-motivated hypothesis, also plays a role in the assessment of the present status of studies of ultra-high-energy cosmic rays.

I am giving disproportionate attention to the particle-interaction analyses described in Sections 3.4, 3.5, and 3.6 because they provide the most discussed and clearest evidence in support of the claim that quantum-spacetime Planck-scale phenomenology does have the ability to discover its target new physics, so much so that some (however tentative) “experimental puzzles” have been considered and are being considered from the quantum-spacetime perspective.

But it is important to also consider the implications of quantum-spacetime-inspired Planck-scale departures from Lorentz symmetry, and particularly Planck-scale modifications of the dispersion relation, for all possible particle-physics processes. And a very valuable type of particle-physics process to be considered are those that are forbidden in a standard special-relativistic setup but could be allowed in the presence of Planck-scale departures from Lorentz symmetry. These processes could be called “anomalous processes”, and in the analysis of some of them one does find opportunities for Planck-scale sensitivity, as already discussed for the case of the process \(\gamma \rightarrow {e^ -}{e^ +}\) in Section 3.3.

For a comprehensive list (and more detailed discussion) of other analyses of anomalous processes, which are relevant for the whole subject of the study of possible departures from Lorentz symmetry (within or without quantum spacetime), readers can rely on Refs. [395, 308] and references therein.

I will just briefly mention one more significant example of an anomalous process that is relevant from a quantum-spacetime-phenomenology perspective: the “vacuum Cerenkov” process, \(e \rightarrow e\gamma\), which in certain scenarios [395, 308, 41] with broken Lorentz symmetry is allowed above a threshold value of the electron energy. This is analyzed in close analogy with the discussion in Section 3.3 for the process \(\gamma \rightarrow {e^ -}{e^ +}\) (which is another example of an anomalous particle interaction).

Since we have at present no evidence of vacuum-Cerenkov processes, the relevant analyses are of the type that sets limits on the parameters of some test theories. Clearly, this observational evidence against vacuum-Cerenkov processes is also relevant for pure-kinematics test theories, but in ways that are difficult to quantify, because of the dependence on the strength of the interactions (an aspect of dynamics). So, here too, one should contemplate the implications of these findings from the perspective of the remarks offered in Section 3.3.1 concerning the plausibility (or lack thereof) of conspiracies between modifications of kinematics and modifications of the strengths of interaction.

Within the FTV0 test theory one can rigorously analyze the vacuum-Cerenkov process, and there, actually, if one arranges for opposite-sign dispersion-relation correction terms for the two helicities of the electron, one can in principle have helicity-changing \(e \rightarrow e\gamma\) at any energy (no threshold); however, estimates performed [395, 308] within the FTV0 test theory show that the rate is extremely small at low energies.

Above the threshold for helicity-preserving \(e \rightarrow e\gamma\) the FTV0 rates are substantial, and this in particular allows an analysis with Planck-scale sensitivity that relies on observations of 50-TeV gamma rays from the Crab nebula. The argument is based on several assumptions (but all apparently robust) and its effectiveness is somewhat limited by the combination of parameters allowed by the FTV0 setup and by the fact that for the 50-TeV gamma rays we observe from the Crab nebula we can only reasonably guess some of the properties of the emitting particles. According to the most commonly adopted model, the relevant gamma rays are emitted by the Crab nebula as a result of inverse Compton processes, and from this one infers [395, 308, 40] that for electrons of energies up to 50 TeV the vacuum-Cerenkov process is still ineffective, which in turn allows one to exclude certain corresponding regions of the FTV0 parameter space.
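
The orders of magnitude involved can be conveyed with a short sketch, assuming the order-of-magnitude estimate \(E_{th} \sim (m_e^2 E_p/\vert\eta\vert)^{1/3}\) for the n = 1 vacuum-Cerenkov threshold (O(1) factors omitted; this normalization is my illustrative assumption, not a formula taken from the analyses of Refs. [395, 308, 40]):

# Order-of-magnitude vacuum-Cerenkov reasoning for Crab-nebula electrons.
E_p = 1.22e28   # Planck scale in eV
m_e = 0.511e6   # electron mass in eV

E_th = (m_e**2 * E_p) ** (1 / 3)        # threshold estimate for |eta| = 1
print(f"threshold for |eta| = 1: ~{E_th / 1e12:.0f} TeV")   # ~ 15 TeV

# If 50-TeV electrons are inferred not to undergo the vacuum-Cerenkov process:
E_obs = 50e12
eta_bound = m_e**2 * E_p / E_obs**3
print(f"|eta| < ~{eta_bound:.2f}")      # ~ 0.03: sub-Planckian sensitivity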

3.8 In-vacuo dispersion for photons

Analyses of thresholds for particle-physics processes, discussed in the previous Sections 3.4, 3.5, 3.6, and 3.7, played a particularly important role in the development of quantum-spacetime phenomenology over the last decade, because the relevant studies were already at Planck-scale sensitivity. In June 2008, with the launch of the Fermi (/GLAST) space telescope [436, 201, 440, 3, 4, 413], we gained access to Planck-scale effects also for in-vacuo dispersion. These studies deserve particular interest because they have broad applicability to quantum-spacetime test theories of the fate of Lorentz/Poincaré symmetry at the Planck scale. In the previous Sections 3.4, 3.5, 3.6, and 3.7, I stressed how the analyses of thresholds for particle-physics processes provided information that is rather strongly model dependent, and dependent on the specific choices of parameters within a given model. The type of insight gained through in-vacuo-dispersion studies is instead significantly more robust.

A wavelength dependence of the speed of photons is obtained [66, 497] from a modified dispersion relation, if one assumes the velocity to still be described by υ = dE/dp. In particular, from the dispersion relation of the PKV0 test theory one obtains (at “intermediate energies”, \(m \ll E \ll {E_p}\)) a velocity law of the form

$$v \simeq 1 - {{{m^2}} \over {2{E^2}}} + \eta {{n + 1} \over 2}{{{E^n}} \over {E_p^n}}.$$
(31)

Arguments and semi-heuristic derivations in support of this type of speed law for massless particles have been reported (Footnote 21) both in the spacetime-noncommutativity literature (see, e.g., Refs. [70, 191]) and in the LQG literature (see, e.g., Refs. [247, 33, 523]).

On the basis of the speed law (31) one would find that two simultaneously-emitted photons should reach the detector at different times if they carry different energy. And this time-of-arrival-difference effect can be significant [66, 491, 459, 539, 232] in the analysis of short-duration gamma-ray bursts that reach us from cosmological distances. For a gamma-ray burst, it is not uncommon (Footnote 22) that the time traveled before reaching our Earth detectors be of order \(T \sim 10^{17}\) s. Microbursts within a burst can have very short duration, as short as \(10^{-3}\) s, and this suggests that the photons that compose such a microburst are all emitted at the same time, up to an uncertainty of \(10^{-3}\) s. Some of the photons in these bursts have energies that extend even above [3] 10 GeV, and for two photons with an energy difference of order ΔE ∼ 10 GeV a ΔE/E p speed difference over a travel time of \(10^{17}\) s would lead [74] to a difference in times of arrival of order \(\Delta t \sim \eta T \Delta E/{E_p}\), i.e., for |η| ∼ 1 a sizeable fraction of a second, which is not negligible (Footnote 23) with respect to the typical variability time scales one expects for the astrophysics of gamma-ray bursts. Indeed, it is rather clear [74, 264] that the studies of gamma-ray bursts conducted by the Fermi telescope provide us access to testing Planck-scale effects, in the linear-modification (“n = 1”) scenario.
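
The arithmetic behind this estimate is elementary; here is a one-line check, using the illustrative numbers just quoted:

# Delta t ~ eta * T * (Delta E / E_p), with the numbers quoted above.
E_p     = 1.22e28   # Planck scale in eV
T       = 1e17      # travel time in s
Delta_E = 10e9      # photon energy difference in eV (10 GeV)

Delta_t = T * Delta_E / E_p            # for |eta| = 1, n = 1
print(f"Delta t ~ {Delta_t:.2f} s")    # ~ 0.1 s, vs. ms microburst durations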

These tests do not actually use Eq. (31), since for redshifts of 1 and higher spacetime curvature/expansion is a very tangible effect. And this introduces nonnegligible complications. Most results in quantum-spacetime research hinting at modifications of the dispersion relation, and possible associated energy/momentum dependence of the speed of massless particles, were derived working essentially in the flat-spacetime/Minkowski limit: it is obvious that analogous effects would also be present when spacetime expansion is switched on, but it is not obvious how formulas should be generalized to that case. In particular, the formula (31) is essentially unique for ultrarelativistic particles in the flat-spacetime limit: we are only interested in leading-order formulas and the difference between \((E/E_p)^n\) and \({p^2}{E^{n - 2}}/E_p^n\) is negligible for ultrarelativistic particles (with \(p^2 \gg m^2\)). How spacetime expansion renders these considerations more subtle is visible already in the case of de Sitter expansion. Adopting conformal coordinates in de Sitter spacetime, with metric \(ds^2 = dt^2 - a^2(t)\,dx^2\) (and \(a(t) = e^{Ht}\)), we have for ultrarelativistic particles (with \(p^2 \gg m^2\)) the velocity formula

$$v \simeq {a^{- 1}}(t) - {{{m^2}} \over {2{p^2}}}a(t),$$
(32)

so already in the undeformed case the coordinate velocity (from which physical time delays will be derived) depends not only on momentum but also on the scale factor a(t). It is not obvious how one should describe leading-order Planck-scale corrections to this, going as some power of momentum. It is natural to make the ansatz

$$v \simeq {a^{- 1}}(t) - {{{m^2}} \over {2{p^2}}}a(t) + \eta {{n + 1} \over 2}{{{p^n}} \over {E_p^n}}{a^k}(t),$$
(33)

with the integer k being at this point one more phenomenological parameter to be determined experimentally. Arguments on which value of the integer k would be most “natural” were reported in Refs. [228, 474, 303, 229], ultimately leading to a consensus [303, 229] converging on k = −n as the most natural choice. I shall not dwell much on this: let me just confirm that I would also give priority to the case k = −n, while not bypassing the obvious fact that the value of k would have to be determined experimentally (and Nature might well have chosen a value of k different from −n).

Assuming that indeed k = −n one would expect for simultaneously emitted massless particles in a Universe parametrized by the cosmological parameters Ω m , ΩΛ, H0 (evaluated today) a momentum-dependent difference in times of arrival at a telescope given by

$$\Delta t \simeq \eta {{n + 1} \over {2{H_0}}}{{{p^n}} \over {E_p^n}}\int\nolimits_0^z {d{z{\prime}}{{{{(1 + {z{\prime}})}^n}} \over {\sqrt {{\Omega _m}{{(1 + {z{\prime}})}^3} + {\Omega _\Lambda}}}},}$$
(34)

where p is the momentum of the particle when detected at the telescope.
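
As an illustration of the magnitude of the effect encoded in Eq. (34), the following sketch evaluates it numerically for GRB-090510-like input values (redshift z ≈ 0.9 and a detected photon of ≈ 30 GeV; the cosmological parameters are also only indicative):

from scipy.integrate import quad

# Numerical evaluation of Eq. (34) for illustrative parameter values.
E_p    = 1.22e28    # Planck scale in eV
H_0    = 2.27e-18   # Hubble parameter in s^-1 (~70 km/s/Mpc)
Om, OL = 0.3, 0.7
n, eta = 1, 1.0
z, p   = 0.9, 30e9  # redshift and detected photon momentum (eV)

I, _ = quad(lambda zp: (1 + zp)**n / (Om * (1 + zp)**3 + OL)**0.5, 0.0, z)
Delta_t = eta * (n + 1) / (2 * H_0) * (p / E_p)**n * I
print(f"Delta t ~ {Delta_t:.1f} s")   # ~ 1 s for |eta| = 1, n = 1

So an effect introduced exactly at the Planck scale would produce time-of-arrival differences of order a second for the highest-energy photons of such a burst, which is why the sub-second time structure of GRB 090510 translates into a bound at around the Planck scale.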

Actually, Planck-scale sensitivity to in-vacuo dispersion can also be provided by observations of TeV flares from certain active galactic nuclei, at redshifts much smaller than 1 (cases in which spacetime expansion is not really tangible). In particular, studies of TeV flares from Mk 501 and PKS 2155-304 performed by the MAGIC [233] and HESS [285] observatories have established [218, 29, 226, 18, 10, 129] bounds on the scale of dispersion, for the linear-effects (“n = 1”) scenario, at about 1/10 of the Planck scale.

But the present best constraints on quantum-spacetime-induced in-vacuo dispersion are derived from observations of gamma-ray bursts reported by the Fermi telescope. There are, so far, four Fermi-detected gamma-ray bursts that are particularly significant for the hypothesis of in-vacuo dispersion: GRB 080916C [3], GRB 090510 [4], GRB 090902B [2], GRB 090926A [482]. The data for each one of these bursts constrain the scale of in-vacuo dispersion, for the linear-effects (“n = 1”) scenario, at better than 1/10 of the Planck scale. In particular, GRB 090510 was a truly phenomenal short burst [4], and the structure of its observation allows us to conservatively establish that the scale of in-vacuo dispersion, for the linear-effects (“n = 1”) scenario, is higher than 1.2 times the Planck scale.

The simplest way to do such analyses is to take one high-energy photon observed from the burst and take as reference its delay Δt with respect to the burst trigger: if one could exclude conspiracies such that the specific photon was emitted before the trigger (we cannot really exclude this, but we would consider it very unlikely, at least with present knowledge), evidently Δt would have to be bigger than any delay caused by the quantum-spacetime effects. This, in turn, allows us, for the case of GRB 090510, to establish the limit at 1.2 times the Planck scale [4]. And, interestingly, even more sophisticated techniques of analysis, using not a single photon but the whole structure of the high-energy observation of GRB 090510, also encourage the adoption of a limit at 1.2 times the Planck scale [4]. It has also been noticed [427] that if one takes at face value the presence of the high-energy photon bunches observed for GRB 090510 as evidence that these photons were emitted nearly simultaneously at the source and are being detected nearly simultaneously, then the bound inferred could be even two orders of magnitude above the Planck scale.

I feel that at least the limit at 1.2 times the Planck scale is reasonably safe/conservative. But it is obvious that here we would feel more comfortable with a wider collection of gamma-ray bursts usable for our analyses. This would allow us to balance, using high statistics, the challenges for such studies of in-vacuo dispersion that (as for other types of studies based on observations in astrophysics discussed earlier) originate from the fact that we only have tentative models of the source of the signal. In particular, the engine mechanisms causing the bursts of gamma rays also introduce correlations at the source between the energy of the emitted photons and the time of their emission. This was in part expected by some astrophysicists [459], and Fermi data allow one to infer it at levels even beyond expectations [3, 4, 527, 376, 187, 256]. In a single observation of gamma-ray-burst events such at-the-source correlations are, in principle, indistinguishable from the effect we expect from in-vacuo dispersion, which indeed is a correlation between times of arrival and energies of the photons. And another challenge I should mention originates from the necessity of understanding at least partly the “precursors” of a gamma-ray burst, another feature that was already expected and to some extent known [362], but which recently came to be recognized as a more significant effect than expected [4, 530].

So, we will reach a satisfactory “comfort level” with our bounds on in-vacuo dispersion only with “high statistics”, a relatively large collection [74] of gamma-ray bursts usable for our analyses. High statistics always helps, but in this case it will also provide a qualitatively new handle for the data analysis: a relatively large collection of high-energy gamma-ray bursts, inevitably distributed over different values of redshift, would help our analyses also because the comparison of bursts at different redshifts can be exploited to achieve results that are essentially free from uncertainties originating from our lack of knowledge of the sources. This is due to the fact that the structure of in-vacuo dispersion is such that the effect should grow in a predictable manner with redshift, whereas we can exclude that the exact same dependence on redshift (if any) could characterize the correlations at the source between the energy of the emitted photons and the time of their emission.

In this respect we might be experiencing a case of tremendous bad luck: as mentioned, we really still have only four gamma-ray bursts to work with, GRB 080916C [3], GRB 090510 [4], GRB 090902B [2], GRB 090926A [482], but on the basis of how Fermi observations had been going during the first 13 months of operation we were led to hope that by this time (end of 2012), after 50 months of operation of Fermi, we might have had as many as 15 such bursts and perhaps 4 or 5 bursts of outstanding interest for in-vacuo dispersion, comparable to GRB 090510. The four bursts we keep using from the Fermi data set were observed during the first 13 months of operation (in particular, GRB 090510 was observed during the 10th month of operation) and we got from Fermi nothing else of any use over the last 37 months. If our luck turns around we should be able to claim for quantum-spacetime phenomenology a first small but tangible success: ruling out at least the specific hypothesis of Planck-scale in-vacuo dispersion, at least specifically for the case of linear effects (“n = 1”).

This being said about the opportunities and challenges facing the phenomenology of in-vacuo dispersion, let me, in closing this section, offer a few additional remarks on the broader picture. From a quantum-spacetime-phenomenology perspective it is noteworthy that, while in the analyses discussed in the previous Sections 3.4, 3.5, 3.6, and 3.7, the amplifier of the Planck-scale effect was provided by a large boost, in this in-vacuo-dispersion case the amplification is due primarily to the long propagation times, which essentially render the analysis sensitive to the accumulation [52] of very many minute Planck-scale effects. For propagation times that are realistic in controlled Earth experiments, in which one could perhaps manage to study the propagation of photons of TeV energies over distances of \(10^{6}\) m, in-vacuo dispersion would still induce, even for n = 1, only time delays of order ∼ \(10^{-18}\) s.

In-vacuo-dispersion analyses of gamma-ray bursts are also extremely popular within the quantum-spacetime-phenomenology community because of the very limited number of assumptions on which they rely. One comes very close to having a direct test of a Planck-scale modification of the dispersion relation. In comparing the PKV0 and the FTV0 test theories, one could exploit the fact that, whereas for the PKV0 test theory the Planck-scale-induced time-of-arrival difference would affect a multi-photon microburst by producing a difference in the “average arrival time” of the signal in different energy channels, within the FTV0 test theory, for an ideally unpolarized signal, one would expect a time-spread of a microburst that grows with energy, but no effect for the average arrival time in different energy channels. This originates from the polarization dependence imposed by the structure of the FTV0 test theory: for low-energy channels the whole effect will be small, but in the highest-energy channels the fact that the two polarizations travel at different speeds will manifest itself as a spreading in time of the signal, without any net average-time-of-arrival effect for an ideally unpolarized signal. Since there is evidence that at least some gamma-ray bursts are somewhat far from being ideally unpolarized (see evidence of polarization reported, e.g., in Refs. [359, 556, 528]), one could also exploit a powerful correlation: within the FTV0 test theory one expects to find some bursts with sizeable energy-dependent average-time-of-arrival differences between energy channels (for bursts with some predominant polarization), and some bursts (the ones with no net polarization) with much smaller average-time-of-arrival differences between energy channels but a sizeable difference in time spreading in the different channels. Polarization-sensitive observations of gamma-ray bursts would allow one to look directly for the polarization dependence predicted by the FTV0 test theory.

Clearly, these in-vacuo dispersion studies using gamma rays in the GeV-TeV range provide us at present with the cleanest opportunity to look for Planck-scale modifications of the dispersion relation. Unfortunately, while they do provide us comfortably with Planck-scale sensitivity to linear (n = 1) modifications of the dispersion relation, they are unable to probe significantly the case of quadratic (n = 2) modifications.

And, while, as stressed, these studies apply to a wide range of quantum-spacetime scenarios with modified dispersion relations, mostly as a result of their insensitivity to the whole issue of the description of the dynamical aspects of a quantum-spacetime theory, one should be aware of the fact that it might be inappropriate to characterize these studies as tests that must necessarily apply to all quantum-spacetime pictures with modified dispersion relations. Most notably, the assumption of obtaining the velocity law from the dispersion relation through the formula υ = dE/dp may or may not be valid in a given quantum-spacetime picture. Validity of the formula υ = dE/dp essentially requires that the theory is still “Hamiltonian”, at least in the sense that the velocity along the x axis is obtained from the commutator with a Hamiltonian (υ x ∼ [x, H]), and that the Heisenberg commutator preserves its standard form (\([x,{p_x}] \simeq i\hbar\), so that \(x \simeq i\hbar \,\partial/\partial {p_x}\)). Especially this second point is rather significant, since heuristic arguments of the type also used to motivate modified dispersion relations suggest [22, 122, 323, 415, 243, 408] that the Heisenberg commutator might have to be modified in the quantum-spacetime realm.

3.9 Quadratic anomalous in-vacuo dispersion for neutrinos

Observations of gamma rays in the GeV-TeV range could provide us with a very sharp picture of Planck-scale-induced dispersion, if it happens to be a linear (n = 1) effect, but, as stressed above, one would need observations of similar quality for photons of significantly higher energies in order to gain access to scenarios with quadratic (n = 2) effects of Planck-scale-induced dispersion. The prospect of observing photons with energies up to \(10^{18}\) eV at ground observatories [471, 74] is very exciting, and should be pursued very forcefully [74], but it represents an opportunity whose viability still remains to be fully established. And in any case we expect photons of such high energies to be absorbed rather efficiently by background soft photons (e.g., CMBR photons), so that we could not observe them from very distant sources.

One possibility that could be considered [65] is that of 1987a-type supernovae; however, such supernovae are typically seen at distances not greater than some \(10^{5}\) light years. And the fact that neutrinos from 1987a-type supernovae can be definitely observed up to energies of at least tens of TeV is not enough to compensate for the smallness of the distances (as compared to typical gamma-ray-burst distances). As a result, using 1987a-type supernovae one might have serious difficulties [65] even to achieve Planck-scale sensitivity for linear (n = 1) modifications of the dispersion relation, and going beyond linear order is clearly not possible.
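
A minimal numeric comparison (with illustrative values) makes the shortfall explicit:

# Why 1987a-type supernovae struggle to reach Planck-scale sensitivity (n = 1).
E_p = 1.22e28        # Planck scale in eV
D   = 1e5 * 3.15e7   # 10^5 light years expressed as a travel time in s
E   = 3e13           # neutrino energy in eV (a few tens of TeV)

Delta_t = D * E / E_p                       # for |eta| = 1, n = 1
print(f"Delta t ~ {Delta_t * 1e3:.0f} ms")  # ~ 8 ms, vs. a burst lasting seconds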

The most advanced plans for in-vacuo-dispersion studies with sensitivity up to quadratic (n = 2) Planck-scale modifications of the dispersion relation actually exploit [230, 168, 61, 301] (see also, for a similar argument within a somewhat different framework, Ref. [116]) once again the extraordinary properties of gamma-ray bursters, but their neutrino emissions rather than their production of photons. Indeed, according to current models [411, 543], gamma-ray bursters should also emit a substantial amount of high-energy neutrinos. Some neutrino observatories should soon observe neutrinos with energies between \(10^{14}\) and \(10^{19}\) eV, and one could either (as appears to be more feasible [301]) compare the times of arrival of these neutrinos emitted by gamma-ray bursters to the corresponding times of arrival of low-energy photons, or compare the times of arrival of different-energy neutrinos (which, however, might require larger statistics than it seems natural to expect).

In assessing the significance of these foreseeable studies of neutrino propagation within different test theories, one should again take into account issues revolving around the possibility of anomalous reactions. In particular, in spite of the weakness of their interactions with other particles, within an effective-field-theory setup neutrinos can be affected by Cherenkov-like processes at levels that are experimentally significant [175], though not if the scale of modification of the dispersion relation is as high as the Planck scale. The recent overall analysis of modified dispersion for neutrinos in quantum field theory given in Ref. [379] shows that for the linear (n = 1) case we are presently able to establish constraints at levels of about \(10^{-2}\) times the Planck scale (and even further from the Planck scale for the quadratic case, n = 2).

3.10 Implications for neutrino oscillations

It is well established [179, 141, 225, 83, 421, 169] that flavor-dependent modifications to the energy-momentum dispersion relations for neutrinos may lead to neutrino oscillations even if neutrinos are massless. This point is not directly relevant for the three test theories I have chosen to use as frameworks of reference for this review. The PKV0 test theory adopts universality of the modification of the dispersion relation, and also the FTV0 test theory describes flavor-independent effects (its effects are “nonuniversal” only in relation to polarization/helicity). Still, I should mention this possibility both because clearly flavor-dependent effects may well attract gradually more interest from quantum-spacetime phenomenologists (some valuable analyses have already been produced; see, e.g., Refs. [395, 308] and references therein), and because even for researchers focusing on flavor-independent effects, it is important to be familiar with constraints that may be set on flavor-dependent scenarios (those constraints, in a certain sense, provide motivation for the adoption of flavor independence).

Most studies of neutrino oscillations induced by violations of Lorentz symmetry were actually not motivated by quantum-gravity/quantum-spacetime research (they were part of the general Lorentz-symmetry-test research area) and assumed that the flavor-dependent violations would take the form of a flavor-dependent speed-of-light scale [179], which essentially corresponds to the adoption of a dispersion relation of the type (13), but with n = 0, and flavor-dependent values of η. A few studies have considered the case (Footnote 24) n = 1 with flavor-dependent η, which is instead mainly of interest from a quantum-spacetime perspective (Footnote 25), and found [141, 225, 421] that for n = 1 from Eq. (13) one naturally ends up with oscillation lengths that depend quadratically on the inverse of the energies of the particles (\(L \propto E^{-2}\)), whereas in the case n = 0 (flavor-dependent speed-of-light scale) such a strong dependence on the inverse of the energies is not possible [141]. In principle, this opens an opportunity for the discovery of manifestations of the flavor-dependent n = 1 case through studies of neutrino oscillations [141, 421]; however, at present there is no evidence of a role for these effects in neutrino oscillations and, therefore, the relevant data analyses produce bounds [141, 421] on the flavor dependence of the dispersion relation.
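
For orientation on the orders of magnitude, the following sketch assumes that the flavor-dependent n = 1 effect enters the oscillation phase through \(\Delta E \simeq \Delta\eta \, E^2/(2E_p)\) at fixed momentum (an illustrative normalization consistent with Eq. (13), not a formula taken from Refs. [141, 225, 421]):

import math

# Oscillation length for flavor-dependent n = 1 effects:
#   L_osc ~ 2*pi / Delta_E ~ 4*pi*E_p / (d_eta * E^2).
E_p   = 1.22e28   # Planck scale in eV
hbarc = 1.97e-7   # eV * m (conversion factor)
d_eta = 1.0       # flavor difference of the eta parameters

for E in (1e9, 1e10, 1e11):                              # neutrino energies in eV
    L_osc = 4 * math.pi * E_p / (d_eta * E**2) * hbarc   # in meters
    print(f"E = {E:.0e} eV: L_osc ~ {L_osc:.0e} m")      # scales as E^-2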

In a part of the next section (4.6), I shall comment again on neutrino oscillations, but in relation to the possible role of quantum-spacetime-induced decoherence (rather than Lorentz-symmetry violations).

3.11 Synchrotron radiation and the Crab Nebula

Another opportunity to set limits on test theories with Planck-scale modified dispersion relations is provided by the study of the implications of modified dispersion relations for synchrotron radiation [306, 62, 309, 378, 231, 420, 39]. An important point for these analyses [306, 309, 378] is the observation that in the conventional (Lorentz-invariant) description of synchrotron radiation one can estimate the characteristic energy E c of the radiation through a semi-heuristic derivation [300] leading to the formula

$${E_c} \simeq {1 \over {R \cdot \delta \cdot [{v_\gamma} - {v_e}]}},$$
(35)

where υ e is the speed of the electron, υ γ is the speed of the photon, δ is the angle of outgoing radiation, and R is the radius of curvature of the trajectory of the electron.

Assuming that the only Planck-scale modification in this formula should come from the velocity law (described using υ = dE/dp in terms of the modified dispersion relation), one finds that in some instances the characteristic energy of synchrotron radiation may be significantly modified by the presence of Planck-scale modifications of the dispersion relation. This originates from the fact that, for example, according to (31), for n = 1 and η < 0, an electron cannot have a speed that exceeds the value \(v_e^{\max} \simeq 1 - (3/2){(\vert \eta \vert {m_e}/{E_p})^{2/3}}\), whereas in SR υ e can take values arbitrarily close to 1.
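
For concreteness, the maximal speed quoted above, and the electron energy at which the velocity law (31) attains it (obtained by maximizing \(1 - m_e^2/2E^2 + \eta E/E_p\) over E for η < 0), can be evaluated as follows:

# Maximal electron speed for n = 1, eta < 0 (here |eta| = 1), from Eq. (31).
E_p = 1.22e28   # Planck scale in eV
m_e = 0.511e6   # electron mass in eV

v_deficit = 1.5 * (m_e / E_p) ** (2 / 3)   # 1 - v_max, for |eta| = 1
E_star    = (m_e**2 * E_p) ** (1 / 3)      # energy at which v(E) is maximal
print(f"1 - v_max ~ {v_deficit:.1e}")               # ~ 2e-15
print(f"attained at E ~ {E_star / 1e12:.0f} TeV")   # ~ 15 TeV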

As an opportunity to test such a modification of the value of the synchrotron-radiation characteristic energy one can attempt to use data [306] on photons emitted by the Crab nebula. This must be done with caution, since the observational information on the synchrotron radiation emitted by the Crab nebula is rather indirect: some of the photons we observe from the Crab nebula are attributed to synchrotron processes, but only on the basis of a (rather successful) model, and the value of the relevant magnetic fields is also not directly measured. But the level of Planck-scale sensitivity that could be within the reach of this type of analysis is truly impressive: assuming that indeed the observational situation has been properly interpreted, and relying on the mentioned assumption that the only modification to be taken into account is the one of the velocity law, one could [306, 378] set limits on the parameter η of the PKV0 test theory that go several orders of magnitude beyond |η| ∼ 1, for negative η and n = 1, and even for quadratic (n = 2) Planck-scale modifications the analysis would fall “just short” of reaching Planck-scale sensitivity (“only” a few orders of magnitude away from |η| ∼ 1 sensitivity for n = 2).

However, the assumptions of this type of analysis, particularly the assumption that nothing changes but the velocity law, cannot even be investigated within pure-kinematics test theories, such as the PKV0 test theory. Synchrotron radiation is due to the acceleration of the relevant charged particles and, therefore, implicit in the derivation of the formula (35) is a subtle role for dynamics [62]. From a quantum-field-theory perspective, the process of synchrotron-radiation emission can be described in terms of Compton scattering of the electrons with the virtual photons of the magnetic field, and its analysis is, therefore, rather sensitive even to details of the description of dynamics in a given theory. Indeed, essentially all of this synchrotron-radiation phenomenology has focused on the FTV0 test theory and its generalizations, so that one can rely on the familiar formalism of quantum field theory. Making reasonably prudent assumptions on the correct model of the source, one can establish [378] valuable (sub-Planckian!) experimental bounds on the parameters of the FTV0 test theory.

3.12 Birefringence and observations of polarized radio galaxies

As I stressed already a few times earlier in this review, the FTV0 test theory, as a result of a rigidity of the adopted effective-field-theory framework, necessarily predicts birefringence, by assigning different speeds to different photon polarizations. Birefringence is a pure-kinematics effect, so it can also be included in straightforward generalizations of the PKV0 test theory, if one assigns a different dispersion relation to different photon polarizations and then assumes that the speed is obtained from the dispersion relation via the standard υ = dE/dp relation.

I have already discussed some ways in which birefringence may affect other tests of dispersion-inducing (energy-dependent) modifications of the dispersion relation, as in the example of searches of time-of-arrival/energy correlations for observations of gamma-ray bursts. The applications I already discussed use the fact that for large enough travel times birefringence essentially splits a group of simultaneously-emitted photons with roughly the same energy and without characteristic polarization into two temporally and spatially separated groups of photons, with different circular polarization (one group being delayed with respect to the other as a result of the polarization-dependent speed of propagation).

Another feature that can be exploited is the fact that, even for travel times somewhat shorter than the ones achieving a separation into two groups of photons, the same type of birefringence can already effectively erase [261, 262] any linear polarization that might have been there to begin with, when the signal was emitted. This observation can in turn be used to argue that, for a given magnitude of the birefringence effects and a given distance from the source, it should be impossible to observe linearly-polarized light, since the polarization should have been erased along the way.

Using observations of polarized light from distant radio galaxies [395, 261, 262, 158, 342, 495] one can comfortably achieve Planck-scale sensitivity (for “n = 1” linear modifications of the dispersion relation) to birefringence effects following this strategy. In particular, the analysis reported in Refs. [261, 262] leads to a limit of \(\vert {\eta _\gamma}\vert < 2 \cdot 10^{-4}\) on the parameter η γ of the FTV0 test theory. And more recent studies of this type have allowed even more stringent bounds to be established (see Refs. [395, 365] and references therein).

Interestingly, even for this strategy based on the effect of removal of linear polarization, gamma-ray bursts could in principle provide formidable opportunities. And there was a report [173] of the observation of polarized MeV gamma rays in the prompt emission of the gamma-ray burst GRB 021206, which would have allowed very powerful bounds on energy-dependent birefringence to be established. However, Ref. [173] has been challenged (see, e.g., Refs. [481, 124]). Still, experimental studies of polarization for gamma-ray bursts continue to be a very active area of research (see, e.g., Refs. [359, 556, 528]), and it is likely that this will gradually become the main avenue for constraining quantum-spacetime-induced birefringence.

3.13 Testing modified dispersion relations in the lab

Over this past decade there has been growing awareness of the fact that data analyses with good sensitivity to effects introduced genuinely at the Planck scale are not impossible, contrary to what was once thought. It is at this point well known, even outside the quantum-gravity/quantum-spacetime community, that Planck-scale sensitivity is achieved in certain (however rare) astrophysics studies. It would be very valuable if we could establish the availability of analogous tests in controlled laboratory setups, but this is evidently more difficult, and opportunities are rare and of limited reach. Still, I feel it is important to keep this goal as a top priority, so in this Section I mention a couple of illustrative examples, which can at least show that laboratory tests are possible. Considering these objectives, it makes sense to focus again on quantum-spacetime-motivated Planck-scale modifications of the dispersion relation, so that the estimates of sensitivity levels achievable in a controlled laboratory setup can be compared to the corresponding studies in astrophysics.

One possibility is to use laser-light interferometry to look for in-vacuo-dispersion effects. In Ref. [68] two examples of interferometric setups were discussed in some detail, with the common feature of making use of a frequency doubler, so that part of the beam travels, for a portion of its journey through the interferometer, at double the reference frequency of the laser beam feeding the interferometer. The setups must be such that the interference pattern is sensitive to the fact that, as a result of in-vacuo dispersion, there is a nonlinear relation between the phase advancement of a beam at frequency ω and a beam at frequency 2ω. For my purposes here it suffices to discuss briefly one such interferometric setup, specifically one in which the frequency (or energy) is the parameter characterizing the splitting of the photon state, so that the splitting is in energy space (rather than the more familiar splitting in configuration space, in which two parts of the beam actually follow geometrically different paths). The frequency doubling could be accomplished using a “second harmonic generator” [487], so that if a wave reaches the frequency doubler with frequency ω then, after passing through the frequency doubler, the outgoing wave in general consists of two components, one at frequency ω and the other at frequency 2ω.

If two such frequency doublers are placed along the path of the beam, then at the end one has a beam with several components, two of which have frequency 2ω: the transmission of the component that left the first frequency doubler as a 2ω wave, and another component that is the result of frequency doubling of the part of the beam that went through the first frequency doubler without change in frequency. The final 2ω beam therefore realizes an interferometer in energy space.

As shown in detail in Ref. [68], the intensity of this 2ω beam takes the form

$${I^{(2\omega)}} = {I_a} + {I_b}\cos \, (\alpha + ({k{\prime}} - 2k)L),$$
(36)

where L is the distance between the two frequency doublers, I a and I b are L-independent (they depend on the amplitude of the original wave and the effectiveness of the frequency doublers [68]), the phase α is also L-independent and is obtained combining several contributions to the phase (both a contribution from the propagation of the wave and a contribution introduced by the frequency doublers [68]), k is the wave number corresponding to the frequency ω through the dispersion relation, and k′ is the wave number corresponding to the frequency 2ω through the dispersion relation (since the dispersion relation is Planck-scale modified one expects departures from the special-relativistic result k′ = 2k).
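
To make the size of this phase mismatch concrete, here is a minimal numerical sketch (my own illustration, not taken from Ref. [68]), assuming an n = 1 modified photon dispersion relation of PKV0 type, k(E) ≃ (E/ħc)(1 + ηE/(2E p )), for which k′ − 2k ≃ ηE²/(ħcE p ) up to sign and O(1) conventions:

```python
# Minimal numerical sketch (not from Ref. [68]): order of magnitude of the
# anomalous phase (k' - 2k) L in the two-frequency-doubler setup, assuming
# an n = 1 modified photon dispersion k(E) = (E / hbar c)(1 + eta E / (2 E_p)),
# which gives k' - 2k = eta E^2 / (hbar c E_p) up to sign and O(1) conventions.

HBAR_C = 1.9733e-7    # hbar * c in eV * m
E_PLANCK = 1.22e28    # Planck energy in eV

def anomalous_phase(E_photon_eV, L_m, eta=1.0):
    """Planck-scale phase mismatch (k' - 2k) L accumulated over a distance L."""
    return eta * E_photon_eV**2 * L_m / (HBAR_C * E_PLANCK)

# Example: a 532 nm photon (~2.33 eV) and 100 m between the frequency doublers.
print(f"delta phi ~ {anomalous_phase(2.33, 100.0):.1e} rad for |eta| = 1")
```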

Since the intensity only depends on the distance L between the frequency doublers through the Planck-scale correction to the phase, (k′ − 2k) L, by exploiting a setup that allows one to vary L, one should rather easily disentangle the Planck-scale effect. And one finds [68] that the accuracy achievable with modern interferometers is sufficient to achieve Planck-scale sensitivity (e.g., sensitivity to |η| ∼ 1 in the PKV0 test theory with n = 1). It is rather optimistic to assume that the accuracy achieved in standard interferometers would also be achievable with this peculiar setup, particularly since it would require the optics aspects of the setup (such as lenses) to work with that high accuracy simultaneously with two beams of different wavelength. Moreover, it would require some very smart techniques to vary the distance between the frequency doublers without interfering with the effectiveness of the optics aspects of the setup. So, in practice we would not presently be capable of using such setups to set Planck-scale-sensitive limits on in-vacuo dispersion, but the fact that the residual obstructions are of rather mundane technological nature encourages us to think that in the not-so-distant future tests of Planck-scale in-vacuo dispersion in controlled laboratory experiments will be possible.

Besides in-vacuo dispersion, another aspect of the physics of Planck-scale modified dispersion relations that we should soon be able to test in controlled laboratory experiments is the one concerning anomalous thresholds, at least in the case of the γγ → e⁺e⁻ process that I already considered from an astrophysics perspective in Section 3.4. It is not so far from our present technical capabilities to set up collisions between 10 TeV photons and 0.03 eV photons, thereby reproducing essentially the situation of the analysis of blazars that I discussed in Section 3.4. And notice that, with respect to the analysis of observations of blazars, such controlled laboratory studies would give much more powerful indications. In particular, for the analysis of observations of blazars discussed in Section 3.4, a key limitation on our ability to translate the data into experimental bounds on parameters of a pure-kinematics framework was due to the fact that (even assuming we are indeed seeing absorption of multi-TeV photons) the astrophysics context does not allow us to firmly establish whether the absorption is indeed due to the IR component of the intergalactic background radiation (as expected) or instead is due to a higher-energy component of the background (in which case the absorption would instead be compatible with some corresponding Planck-scale pictures). If collisions between 10 TeV and 0.03 eV photons in the lab do produce pairs, since we would in that case have total control of the properties of the particles in the “in” state of the process, we would then have firm pure-kinematics bounds on the parameters of certain corresponding Planck-scale test theories (such as the PKV0 test theory).
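
As a rough check of these numbers, the following sketch (again my own illustration, not from the cited analyses) computes the standard head-on threshold m²/ϵ for a 0.03 eV soft photon and the size of a putative n = 1 Planck-scale term of the generic form ηE³/(ϵE p ) (the precise coefficient is test-theory dependent):

```python
# Rough sketch of the gamma-gamma -> e+ e- threshold for head-on collisions,
# and of the size of a putative n = 1 Planck-scale correction term of the
# generic form eta * E^3 / (eps * E_p); coefficient conventions vary by
# test theory, so this is only an order-of-magnitude illustration.

M_E = 0.511e6        # electron mass in eV
E_PLANCK = 1.22e28   # Planck energy in eV

def classical_threshold(eps_eV):
    """Standard special-relativistic threshold E >= m_e^2 / epsilon."""
    return M_E**2 / eps_eV

eps = 0.03  # soft-photon energy in eV, as in the laboratory setup described above
E_th = classical_threshold(eps)
planck_term = E_th**3 / (eps * E_PLANCK)   # candidate correction for |eta| ~ 1

print(f"classical threshold: {E_th:.2e} eV (~{E_th/1e12:.1f} TeV)")
print(f"Planck-scale term at threshold: {planck_term:.2e} eV "
      f"({planck_term/E_th:.0%} of threshold for |eta| ~ 1)")
```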

These laboratory studies of Planck-scale-modified dispersion relations could also be adapted to the FTV0 test theory, by simply introducing some handles on the polarization of the photons that are placed under observation (also see Refs. [254, 255]), thereby reaching sensitivities not far from the Planck scale in controlled laboratory experiments.

3.14 On test theories without energy-dependent modifications of dispersion relations

Readers for whom this review is the first introduction to the world of quantum-spacetime phenomenology might be surprised that this long section, with an ambitious title announcing tests of Lorentz symmetry, was so heavily biased toward probing the form of the energy-momentum dispersion relation. Other aspects of the implications of Lorentz (and Poincaré) symmetry did intervene, such as the law of energy-momentum conservation and its deformations (and the form of the interaction vertices and their deformations), and are in part probed through the data analyses reviewed, but the feature that is clearly at center stage is the structure of the dispersion relation. The reason for this is rather simple: researchers who recognize themselves as “quantum-spacetime phenomenologists” will consider a certain data analysis as part of the field if that analysis concerns an effect that can be robustly linked to quantum properties of spacetime (rather than, for example, some classical-field background) and if the analysis exposes the availability of Planck-scale sensitivities, in the sense I described above. At least according to the results obtained so far, the aspect of Lorentz/Poincaré symmetry that is most robustly challenged by the idea of a quantum spacetime is the form of the dispersion relation, and this is also an aspect of Lorentz/Poincaré symmetry for which the last decade of work on this phenomenology robustly exposed opportunities for Planck-scale sensitivities.

For the type of modifications of the dispersion relation that I considered in this section, we have at present rather robust evidence of their applicability in certain noncommutative pictures of spacetime, where the noncommutativity is very clearly introduced at the Planck scale. And several independent (although all semi-heuristic) arguments suggest that the same general type of modified dispersion relations should apply to the “Minkowski limit” of LQG, a framework where a certain type of discretization of spacetime structure is introduced genuinely at the Planck scale. Unfortunately, these two frameworks are so complex that one does not manage to analyze spacetime symmetries much beyond building a “case” (though not a watertight case) for modified dispersion relations.

A broader range of Lorentz-symmetry tests could be valuable for quantum-spacetime research, but without the support of a derivation it is very hard to argue that the relevant effects are being probed with sensitivities that are significant from a quantum-spacetime/Planck-scale perspective. Think, for example, of a framework, such as the one adopted in Ref. [179], in which the form of the dispersion relation is modified, but not in an energy-dependent way: one still has dispersion relations of the type \({E^2} = c_\# ^2{p^2} + m_\# ^2\), but with a different value of the velocity scale c# for different particles. This is not necessarily a picture beyond the realm of possibilities one would consider from a quantum-spacetime perspective, but there is no known quantum-spacetime picture that has provided direct support for it. And it is also essentially impossible to estimate what accuracy must be achieved in measurements of c proton − c electron in order to reach Planck-scale sensitivity. Some authors qualify as “Planckian” a magnitude of this type of effect such that the dimensionless parameter has a value on the order of the ratio of the mass of the particles involved in the process to the Planck scale (as in c proton − c electron ∼ (m proton ± m electron)/E p ), but this arbitrary criterion clearly does not amount to establishing genuine Planck-scale sensitivity, at least as long as we do not have a derivation, starting with spacetime quantization at the Planck scale, that actually finds such magnitudes for these sorts of effects.

Still, it is true that the general structure of the quantum-gravity problem and the structure of some of the quantum spacetimes that are being considered for the Minkowski limit of quantum gravity might host a rather wide range of departures from classical Lorentz symmetry. Correspondingly, a broad range of Lorentz-symmetry tests could be considered of potential interest.

I shall not review here this broader Lorentz-symmetry-tests literature, since it is not specific to quantum-spacetime research (these are tests that could be done, and in large part were done, even before the development of research on Lorentz symmetries from within the quantum-spacetime literature) and it has already been reviewed very effectively in Ref. [395]. Let me just stress that for these broad searches of departures from Lorentz symmetry one needs test theories with many parameters. Formalisms that are well suited to a systematic program of such searches are already at an advanced stage of development [180, 181, 340, 343, 123, 356, 357] (also see Ref. [239]), and in particular the “standard-model-extension” framework [180, 181, 340, 343] has reached a high level of adoption among theorists and experimentalists as the language in which to characterize the results of systematic multi-parameter Lorentz-symmetry-test data analyses. The “Standard Model Extension” was originally conceived [340] as a generalization of the Standard Model of particle-physics interactions restricted to power-counting-renormalizable correction terms, and as such it was of limited interest for the bulk of the quantum-spacetime/quantum-gravity community: since quantum gravity is not a (perturbatively) renormalizable theory, many quantum-spacetime researchers would be unimpressed with Lorentz-symmetry tests restricted to power-counting-renormalizable correction terms. However, over these last few years [123] most theorists involved in studies of the “Standard Model Extension” have started to add correction terms that are not power-counting renormalizable. A good entry point for the literature on limits on the parameters of the “Standard Model Extension” is provided by Refs. [395, 123, 346].

From a quantum-gravity-phenomenology perspective it is useful to contemplate the differences between alternative strategies for setting up a “completely general” systematic investigation of possible violations of Lorentz symmetry. In particular, it has been stressed (see, e.g., Refs. [356, 357]) that violations of Lorentz symmetry can be introduced directly at the level of the dynamical equations, without assuming (as done in the Standard Model Extension) the availability of a Lagrangian generating the dynamical equations. This is more general than the Lagrangian approach: for example, the generalized Maxwell equation discussed in Refs. [356, 357] predicts effects that go beyond the Standard Model Extension. And charge conservation, which automatically follows from the Lagrangian approach, can be violated in models generalizing the field equations [356, 357]. The comparison of the Standard-Model-Extension approach with the approach based on generalizations introduced directly at the level of the dynamical equations illustrates how different “philosophies” lead to different strategies for setting up a “completely general” systematic investigation of possible departures from Lorentz symmetry. By removing the assumption of the availability of a Lagrangian, the second approach is “more general”. Still, no “general approach” can be absolutely general: in principle one could always consider removing an extra layer of assumptions. As the topics I have reviewed in this section illustrate, from a quantum-spacetime-phenomenology perspective it is not necessarily appropriate to seek the most general parametrizations. On the contrary, we would like to single out some particularly promising candidate quantum-spacetime effects (as in the case of modified dispersion relations) and focus our efforts accordingly.

4 Other Areas of UV Quantum-Spacetime Phenomenology

Tests of Lorentz symmetry, and particularly of the form of the dispersion relation, probably make up something on the order of half of the whole quantum-spacetime-phenomenology literature. The other half is spread over a few other, evidently less developed, research lines. Nonetheless, for some of these other research lines the literature has reached some nonnegligible maturity, and even those that are at preliminary stages of development could be precious potential opportunities for quantum-spacetime research.

Evidently, the most challenging part of this review work concerns these other components of quantum-spacetime-phenomenology research, since it is harder to summarize and organize intelligibly the results and scopes of research programs that are still in early stages of development. But it is also the part of this review that could be most valuable, since there is already some work [52, 308, 485, 294] attempting to summarize and review, although more concisely than done here in Section 3, the results obtained by the quantum-spacetime phenomenology of Planck-scale modified dispersion relations.

In reporting on the work done in these other quantum-spacetime-phenomenology research lines, I shall use as one of the guiding concepts the assessment of whether a given research program concerns UV quantum-spacetime effects or IR quantum-spacetime effects. The typical situation for a UV quantum-spacetime effect is that it takes the form of correction terms that grow with the energy of the particles, and whose significance is therefore increasingly high as the energy of the particles increases. For any given standard-physics (no-quantum-spacetime) prediction A0, this will take the general form

$${A_0} \rightarrow {A_0}\left({1 + {\eta _\#}{{{E^n}} \over {E_p^n}}} \right)\quad \;({\rm{with\;a\;context \hbox{-}/theory \hbox{-} specific\;numerical\;factor}}\;{\eta _\#})$$
(37)

This is the type of quantum-spacetime effect that one traditionally expects to be inevitably produced by any form of spacetime quantization, and it is the focus of this section. The possibility of “IR quantum-spacetime effects”, effects that are due to Planck-scale spacetime quantization but are significant in some deep-IR regime, came to the attention of the community only rather recently, emerging mainly from work on “IR/UV mixing in quantum spacetime”, and I shall focus on it in the next Section 5.

4.1 Preliminary remarks on fuzziness

In this review, as is natural for phenomenology, I am primarily looking at quantum-spacetime effects from the perspective of the type of pre-quantum-spacetime laws that they affect (so we have “departures from classical spacetime symmetries”, “violations of quantum-mechanical coherence”, and so on). And our experimental opportunities are such that the main focus is on how spacetime quantization could affect particle propagation (and, for a restricted sample of phenomenological opportunities, interactions among particles). For this section on “other UV quantum-spacetime effects” a significant role (particularly noticeable in Sections 4.2, 4.3, and 4.5) will be played by the idea that quantum-spacetime effects may introduce an additional irreducible contribution to the fuzziness of the worldlines of particles.

This should be contrasted with the content of Section 3, which focuses mainly on phenomenological proposals involving mechanisms for systematic departures from the currently-adopted laws of propagation of (and interaction among) particles. In most cases such systematic effects amount to departures from the predictions of Lorentz symmetry (such as a systematic dependence of the velocity of a massless particle on its energy, which would produce a systematic difference between the arrival times of high-energy and low-energy photons that are simultaneously emitted). If it ends up being the case that the correct quantum-spacetime picture does not provide us with any such systematic effects, then we will be left with non-systematic effects, i.e., “fuzziness” [57]. In looking for such effects we can be guided by the intuition that spacetime quantization might act as an environment inducing apparently random fluctuations in certain observables. For example, by distance fuzziness one does not describe an effect that would systematically give rise to larger (or smaller) distance-measurement results, but rather a sort of new uncertainty principle for distance measurements.

This distinction between systematic and nonsystematic effects can easily be characterized for any given observable \(\hat X\) for which the pre-quantum-spacetime theoretical prediction can be described in terms of a “prediction” X and, possibly, a fundamental (ordinarily quantum mechanical) “uncertainty” δX. The effects of spacetime quantization in general could lead [57] to a new prediction X′ and a new uncertainty δX′. One would attribute to quantum spacetime the effects

$${(\Delta X)_{{\rm{QG}}}} \equiv {X{\prime}} - X$$
(38)

and

$${(\delta X)_{{\rm{QG}}}} \equiv \delta {X{\prime}} - \delta X.$$
(39)

One can speak of purely systematic quantum-spacetime effects when (ΔX)QG ≠ 0 and (δX)QG = 0, while the opposite case, (ΔX)QG = 0 and (δX)QG > 0, can be qualified as purely non-systematic. It is likely that for many observables both types of quantum-spacetime effects are present simultaneously, but it is natural that at least the first stages of development of a quantum-spacetime phenomenology for an observable \(\hat X\) be focused on one or the other special case ((δX)QG = 0 or (ΔX)QG = 0). Clearly, the discussions of effects given in Section 3 were all with (δX)QG = 0, while for most of the proposals discussed in this section the main focus will be on effects characterized by (ΔX)QG = 0.
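
As a toy illustration of this bookkeeping (with arbitrary numbers, not tied to any specific test theory), one can simulate repeated measurements of an observable and read off the two quantities of Eqs. (38) and (39):

```python
# Toy illustration of Eqs. (38)-(39): a purely systematic effect shifts the
# predicted value X without changing the spread, while a purely nonsystematic
# ("fuzziness") effect leaves the mean untouched and inflates the spread.
# All numbers here are arbitrary.

import random

random.seed(1)

def simulate(mean, sigma, n=100_000):
    xs = [random.gauss(mean, sigma) for _ in range(n)]
    mu = sum(xs) / n
    var = sum((x - mu)**2 for x in xs) / n
    return mu, var**0.5

X, dX = 1.0, 0.10                       # pre-quantum-spacetime prediction and uncertainty
mu_sys, sd_sys = simulate(1.001, 0.10)  # systematic: (Delta X)_QG != 0, (delta X)_QG = 0
mu_fuz, sd_fuz = simulate(1.0, 0.12)    # nonsystematic: (Delta X)_QG = 0, (delta X)_QG > 0

print(f"systematic:    (Delta X)_QG ~ {mu_sys - X:+.4f}, (delta X)_QG ~ {sd_sys - dX:+.4f}")
print(f"nonsystematic: (Delta X)_QG ~ {mu_fuz - X:+.4f}, (delta X)_QG ~ {sd_fuz - dX:+.4f}")
```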

4.2 Spacetime foam, distance fuzziness and interferometric noise

The scenarios for spacetime fuzziness that are most studied from a quantum-spacetime perspective are intuitively linked to the notion of “spacetime foam”, championed by Wheeler and studied extensively in the quantum-gravity literature, more or less directly, for several decades (see, e.g., Refs. [547, 203, 178, 281, 150, 250, 553]). From a modern perspective one is attempting to characterize the physics of matter particles as effectively occurring in an “environment” of short-distance quantum-gravitational degrees of freedom. And one may expect that for propagating particles with wavelength much larger than the Planck length, when it may be appropriate to integrate out these short-distance quantum-gravitational degrees of freedom, the main residual effect of short-distance gravity would indeed be an additional contribution to the fuzziness of worldlines.

While in full-fledged quantum-spacetime theories, such as LQG, such analyses are still beyond our reach, one can find partial encouragement for this intuition in recent progress on the understanding of quantum gravity in 3D (2+1-dimensional) spacetime. Studies such as the ones reported in Refs. [75, 237, 412, 152, 259, 238, 313, 441] establish that for 3D quantum gravity (exploiting its much lower complexity than the 4D case) we are able to perform the task needed for studies of spacetime foam: we can actually integrate out gravity, reabsorbing its effects into novel properties of a gravity-free propagation of particles. And foaminess is formalized in the fact that this procedure of integrating out gravity leaves us with a theory of free particles in a noncommutative spacetime [75, 237, 259, 238], specifically a spacetime with “Lie-algebra noncommutativity”

$$[{x_\alpha},{x_\beta}] = i\kappa _{\alpha \beta}^\gamma {x_\gamma}$$
(40)

(in particular the choice of \(\kappa _{\alpha \beta}^\gamma\) as the Levi-Civita tensor is the one suggested by the direct derivation given in Ref. [441]). In other words, upon integrating out the gravitational degrees of freedom, the quantum dynamics of matter fields coupled to 3D gravity is effectively described [238] by matter fields in a noncommutative spacetime, a fuzzy/foamy spacetime.
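
As a small consistency check (my own aside, not part of the cited derivations), one can verify that choosing \(\kappa _{\alpha \beta}^\gamma\) proportional to the Levi-Civita tensor makes Eq. (40) a legitimate Lie-algebra bracket, i.e., that the structure constants satisfy the Jacobi identity:

```python
# Consistency check (not from Refs. [75, 237, 259, 238, 441]): with
# kappa^gamma_{alpha beta} proportional to the Levi-Civita tensor, the bracket
# of Eq. (40) has structure constants f_abc = epsilon_abc, and a valid Lie
# algebra requires the Jacobi identity for these constants.

def eps(a, b, c):
    """Three-dimensional Levi-Civita symbol."""
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((a, b, c), 0)

# Jacobi identity in terms of structure constants:
# sum_e ( f_abe f_ecd + f_bce f_ead + f_cae f_ebd ) = 0 for all a, b, c, d.
ok = all(
    sum(eps(a, b, e) * eps(e, c, d)
        + eps(b, c, e) * eps(e, a, d)
        + eps(c, a, e) * eps(e, b, d) for e in range(3)) == 0
    for a in range(3) for b in range(3) for c in range(3) for d in range(3)
)
print("Jacobi identity satisfied:", ok)  # expect: True
```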

While the only direct/deductive derivations of such results are for 3D quantum gravity, it is natural to take them as a starting point for the study of real 4D quantum gravity, for which analogous results are still unavailable. And a sizable literature has been devoted to the search for possible experimental manifestations of “spacetime foam”. Several subsections of this section concern related phenomenological proposals. I start with spacetime-foam test theories, whose structure renders them well suited for interferometric tests.

4.2.1 Spacetime foam as interferometric noise

The first challenge for a phenomenology investigating the possibility of spacetime foam originates in the fact that Wheeler’s spacetime foam intuition, while carrying strong conceptual appeal, cannot on its own be used for phenomenology, since it is not characterized in terms of observable properties. The phenomenology then is based on test theories inspired by the spacetime-foam intuition.

A physical/operative definition of at least one aspect of spacetime foam is given in Refs. [51, 54, 53, 433] and is well suited for a phenomenology based on interferometry. According to this definition, the fuzziness/foaminess of a spacetime is established [51, 54, 53, 433] on the basis of an analysis of strain noise in interferometers set up in that spacetime. In achieving their remarkable accuracy, modern interferometers must deal with several classical-physics strain-noise sources (e.g., thermal and seismic effects induce fluctuations in the relative positions of the test masses). And, importantly, strain-noise sources associated with effects of ordinary quantum mechanics are also significant for modern interferometers (the combined minimization of photon shot noise and radiation-pressure noise leads to a noise source that originates from ordinary quantum mechanics [486]). One can give an operative definition [51, 53] of fuzzy/foamy spacetime in terms of a corresponding additional source of strain noise: a theory in which the concept of distance is fundamentally fuzzy in this operative sense would be such that the read-out of an interferometer would still be noisy (because of quantum-spacetime effects) even in the idealized limit in which all classical-physics and ordinary-quantum-mechanics noise sources are completely eliminated/subtracted.

4.2.2 A crude estimate for laser-light interferometers

Before even facing the task of developing test theories for spacetime foaminess in interferometry, it is best to first check whether there is any chance of using realistic interferometric setups to uncover effects as small as expected if introduced at the Planck scale. A first encouraging indication comes from identifying the presence of a huge amplifier in modern interferometers: a well-known quality of these modern interferometers is their ability to detect gravity waves of amplitude ∼ 10−18 m by carefully monitoring distances of order ∼ 104 m, and this ratio provides an “amplifier” of order 1022.

This also means that our modern interferometers have outstanding control over noise sources, which is ideal for the task at hand, involving scenarios for how quantum-spacetime effects may contribute an additional source of noise in such interferometers. Clearly, the noise we could conceivably see emerging from spacetime quantization should be modeled in terms of some random vibrations. Evidently random vibrations are particularly difficult to characterize. For example there is in general no spendable notion of “amplitude” of random vibrations. The most fruitful way to characterize them, also for the purposes of comparing their “intensity” to other non-random sources of vibration that might affect the same system, is by using the power spectral density. Let me introduce some notation, which will prove useful when I move on to discuss crude models of quantum-spacetime-induced noise. For this I simple-mindedly consider the readout of an interferometer as h(t), given by the position x(t) of a mirror divided by a reference length scale L (h(t) = x(t)/L), and adjust the reference frame so that on average x(t) vanishes, μ x = 0. Given some rules for fluctuations of this readout one can indeed be interested in its power spectral density Σ(ω), in principle computable via [486]

$$\Sigma (\omega) = \int\nolimits_{- \infty}^\infty {d\tau \;{\mu _{[h(t)h(t + \tau)]}}\;{e^{- 2\pi i\omega \tau}},}$$
(41)

where μ[h(t)h(t+τ)] depends only on τ and is the value expected on average for h(t)h(t + τ) in the presence of the vibration/fluctuation process of interest in the analysis.

Having characterized the noise source in terms of its power spectral density we can then easily compute some primary characteristics, such as its root mean square deviation σ h , which for cases of zero-mean noise, such as the one I am considering, will be given by the expectation of h2. This can be expressed in terms of the power spectral density as follows [486]

$$\sigma _h^2 = {\mu _{{h^2}}} = \int\nolimits_{- \infty}^\infty {d\omega \Sigma (\omega)}.$$
(42)

In experimental practice, for a frequency-band-limited signal (fmax) and a finite observation time (Tobs), this relation will take the form

$$\sigma _h^2 \simeq \int\nolimits_{1/{T_{{\rm{obs}}}}}^{{f_{\max}}} {d\omega \Sigma (\omega)}.$$
(43)

In modern interferometers such as LIGO [9, 1] and VIRGO [157, 12] the power spectral density of the noise is controlled at a level of Σ(ω) ∼ 10−44 Hz−1 at observation frequencies ω of about 100 Hz, and in turn this (also considering the length of the arms of these modern interferometers) implies [9, 1, 157, 12] that for a gravity wave with 100 Hz frequency the detection threshold is indeed around 10−18 m.

The challenge here for quantum-spacetime phenomenologists is to characterize the relevant quantum-spacetime effects in terms of a novel contribution Σ[QG] (ω) to the power spectral density of the noise. If at some point experimentalists manage to bring the total noise Σ(ω), for some range of observation frequencies ω, below the level predicted by a certain quantum-spacetime test theory, then that test theory will be ruled out.

Is there any hope for a reasonable quantum-spacetime test theory to predict noise at a level comparable to the ones that are within the reach of modern interferometry? Well, this is the type of question that one can only properly address in the context of models, but it may be valuable to first use dimensional analysis, assuming the most optimistic behavior of the quantum-spacetime effects, and check if the relevant order of magnitude is at all providing any encouragement for the painful (if at all doable) analysis of the relevant issues in quantum-spacetime models.

To get what is likely to be the most optimistic (and certainly the simplest, but not necessarily the most realistic) Planck-scale estimate of the effect, let us assume that quantum-spacetime noise is “white noise”, Σ[QG](ω) = Σ0 (frequency independent), so that it is fully specified by a single dimensionful number setting the level of this white noise. And since Σ carries units of Hz−1 one easily notices [54] a tempting simple naive estimate in terms of the Planck length and the speed-of-light scale: Σ0 ∼ L p /c, which, since L p /c ∼ 10−44 Hz−1, encouragingly happens to be just at the mentioned level of sensitivity of LIGO/VIRGO-type interferometers. This provides some initial encouragement for a phenomenology based on interferometric noise, though only within the limitations of a very crude and naive estimate.
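
The arithmetic behind this estimate can be made explicit with a small sketch implementing the band-limited integral of Eq. (43) for an assumed white spectrum at the naive level Σ0 ∼ L p /c (the observation time and band edge below are my own illustrative choices; any of the candidate spectra discussed in the next subsection can be substituted for the flat one):

```python
# Sketch of Eq. (43): root-mean-square strain from a given noise power
# spectral density, integrated between 1/T_obs and f_max. Shown for a flat
# spectrum Sigma(omega) = Sigma_0 ~ L_p / c ~ 1e-44 Hz^-1; T_obs and f_max
# are illustrative assumptions.

import math

def sigma_h(spectral_density, T_obs, f_max, steps=100_000):
    """Numerically integrate Sigma(omega) d omega over [1/T_obs, f_max]."""
    lo, hi = 1.0 / T_obs, f_max
    total = 0.0
    # log-spaced trapezoidal rule, since Sigma may vary over many decades
    for i in range(steps):
        w0 = lo * (hi / lo) ** (i / steps)
        w1 = lo * (hi / lo) ** ((i + 1) / steps)
        total += 0.5 * (spectral_density(w0) + spectral_density(w1)) * (w1 - w0)
    return math.sqrt(total)

SIGMA_0 = 1e-44  # Hz^-1, the naive white-noise level suggested by L_p / c
print(f"sigma_h ~ {sigma_h(lambda w: SIGMA_0, T_obs=1.0, f_max=1e4):.1e}")
```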

4.2.3 A simple-minded mechanism for noise in laser-light interferometers

My next task is to go beyond the simplifying assumption that the quantum-spacetime noise is white, and beyond the naive dimensional-analysis estimate of what could constitute a Planck-scale level of such noise. The ultimate objective here would be to analyze an interferometer in the framework of a compelling quantum-spacetime theory, but this is beyond our capabilities at present. However, we can start things off by identifying some semi-heuristic pictures (the basis for a test theory) with effects introduced genuinely at the Planck scale that turn out to produce strain noise at the level accessible with modern interferometers.

Having in mind this objective, let us take as a starting point for a first naive picture of spacetime fuzziness the popular arguments suggesting that the Planck scale should also set some absolute limitation on the measurability of distances. And let us (optimistically) assume that this translates to the fact that any experiment in which a distance L plays a key role (meaning that one is either measuring L itself or the observable quantity under study depends strongly on L) is affected by a mean square deviation \(\sigma _L^2\).

It turns out to be useful [51, 53] to consider this \(\sigma _L^2\) as a possible stepping stone toward the strain-noise power-spectrum estimate. And a particularly striking picture arises by assuming that the distances L between the test masses of an interferometer be affected by Planck-length fluctuations of random-walk type occurring at a rate of one per Planck time (∼ 10−44 s), so that [51, 53]

$$\sigma _L^2 \simeq {L_p}T \simeq {L_p}L\quad [{\rm{random}}\;{\rm{walk}}\;{\rm{case}}],$$
(44)

where T is the time scale over which the experiment monitors the distance L, assuming the use of ultrarelativistic particles (T ≃ L).
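
The arithmetic leading to Eq. (44) is elementary and can be checked directly: N = T/t p unbiased Planck-length steps accumulate a variance L p ²N = L p cT. A minimal sketch (the km-scale example distance is my own choice):

```python
# Arithmetic behind Eq. (44): Planck-length steps at a rate of one per Planck
# time accumulate, after an observation time T, a variance
# sigma_L^2 = L_p^2 * (T / t_p) = L_p * c * T, i.e., sigma_L^2 ~ L_p * L
# for ultrarelativistic propagation (L ~ c T).

L_P = 1.616e-35   # Planck length, m
T_P = 5.391e-44   # Planck time, s
C = 2.998e8       # speed of light, m/s

def random_walk_variance(T_obs_s):
    n_steps = T_obs_s / T_P          # one Planck-length step per Planck time
    return L_P**2 * n_steps          # variance of an unbiased random walk

T = 4e3 / C                          # light crossing a km-scale interferometer arm
print(f"sigma_L^2 ~ {random_walk_variance(T):.2e} m^2 "
      f"(compare L_p * L = {L_P * 4e3:.2e} m^2)")
```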

It is noteworthy that \(\sigma _L^2 \simeq {L_p}T\) can be motivated independently (without having in mind the idea of such effective spacetime fluctuations) on the basis of some aspects of the quantum-gravity problem [50]. And the study of certain quantum-spacetime pictures that have been of interest to the quantum-gravity community, such as the κ-Minkowski noncommutative spacetime of Eq. (4), provides some support for this random-walk picture: from [x j , t] = iL P x j one could roughly guess a law of the form \(\sigma _x^2 \sim \delta x\delta t \sim {L_P}x\).

Some arguments inspired by the “holography paradigm” for quantum gravity [433, 430, 170] suggest even weaker effects, characterized by

$$\sigma _L^2 \simeq L_p^{4/3}{L^{2/3}}\quad [{\rm{holography}} - {\rm{inspired}}\;{\rm{case}}].$$
(45)

Interestingly, this ansatz \(\sigma _L^2 \simeq L_p^{4/3}{L^{2/3}}\) had been independently proposed in the quantum-gravity literature on the basis of a perspective on the quantum-gravity problem (see Refs. [432, 319, 209]) that originally in no way involved spacetime fuzziness.

Probably the most conservative (and pessimistic) expectation for spacetime fuzziness one can find in the quantum-spacetime literature is the one omitting any opportunity for amplification by the involvement of a long observation time (see, e.g., parts of Refs. [249, 293])

$$\sigma _L^2 \simeq L_p^2\quad [{\rm{weakest}}\;{\rm{case}}].$$
(46)

The random-walk case is the most typical textbook study case for random noise. Its power spectral density goes like ω−2, so one should have

$$\Sigma _{\rm{L}}^{[{\rm{QG}};{\rm{rw}}]} \simeq {{{L_P}} \over {{\omega ^2}}},$$
(47)

which gives

$$\sigma _h^2\sim {{\sigma _L^2} \over {{L^2}}} \simeq {1 \over {{L^2}}}\int\nolimits_{1/{T_{{\rm{obs}}}}}^{{f_{\max}}} {d\omega {{{L_P}} \over {{\omega ^2}}} \simeq {{{L_P}{T_{{\rm{obs}}}}} \over {{L^2}}}}$$
(48)

(so, for L ≃ Tobs one indeed finds \(\sigma _L^2 \sim {L_P}L\)).

Analogously, one can associate to the “holographic noise” of Eq. (45) a power spectral density going as ω−5/3, so one should have

$$\Sigma _L^{[{\rm{QG}};{\rm{holo}}]} \simeq {{L_P^{4/3}} \over {{\omega ^{5/3}}}},$$
(49)

which indeed gives \(\sigma _L^2 \simeq L_p^{4/3}T_{{\rm{obs}}}^{2/3} \simeq L_p^{4/3}{L^{2/3}}\).

And, finally, for the \(\sigma _L^2 \simeq L_p^2\) case of Eq. (46), a rough but valuable approximate description of the power spectral density goes like ω−1, so one should have

$$\Sigma _L^{[{\rm{QG}};{\rm{weak}}]} \simeq {{L_P^2} \over \omega},$$
(50)

which indeed gives \(\sigma _L^2 \simeq L_p^2\) (up to a logarithmic factor).

It is tempting to obtain from these estimates of the quantum-spacetime-induced distance uncertainty an estimate for the quantum-spacetime-induced strain noise, by simply dividing by the square of the length of the arms of the interferometer, \({\Sigma ^{[{\rm{QG}}]}} = \Sigma _L^{[{\rm{QG}}]}/{L^2}\). This would be the way to proceed if we were converting distance noise into strain noise, but really here we are obtaining a rough estimate of strain noise from an estimate of distance uncertainty, and I shall therefore proceed in some sense sub judice (see in particular my comments below concerning the large number of photons collectively used for producing the accurate measurements of a modern interferometer). Assuming that indeed \({\Sigma ^{[{\rm{QG}}]}} = \Sigma _L^{[{\rm{QG}}]}/{L^2}\), and taking as reference value an observation frequency of ω ∼ 100 Hz, one would get for the three cases I discussed the following estimates of strain noise at 100 Hz, for arm lengths of a few kilometers:

$${\Sigma ^{[{\rm{QG}};{\rm{weak}}]}}\sim {10^{- 78}}{\rm{H}}{{\rm{z}}^{- 1}},\quad {\Sigma ^{[{\rm{QG}};{\rm{holo}}]}}\sim {10^{- 52}}{\rm{H}}{{\rm{z}}^{- 1}},\quad {\Sigma ^{[{\rm{QG}};{\rm{rw}}]}}\sim {10^{- 38}}{\rm{H}}{{\rm{z}}^{- 1}}.$$
(51)
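
These orders of magnitude are easily reproduced. The following sketch evaluates the three spectra of Eqs. (47), (49) and (50) under the assumption Σ[QG] = Σ L [QG]/L², with factors of c inserted purely on dimensional grounds (only orders of magnitude are meaningful here):

```python
# Reproducing the order of magnitude of Eq. (51), assuming
# Sigma^[QG] = Sigma_L^[QG] / L^2, for omega ~ 100 Hz and km-scale arms.
# The Planck scale enters through t_p = L_p / c; factors of c are placed
# purely on dimensional grounds, so only orders of magnitude are meaningful.

T_P = 5.4e-44   # Planck time, s
C = 3.0e8       # speed of light, m/s
omega = 100.0   # observation frequency, Hz
L = 4.0e3       # arm length, m

rw   = C**2 * T_P / (omega**2 * L**2)              # random walk, Eq. (47)
holo = C**2 * T_P**(4/3) / (omega**(5/3) * L**2)   # holography-inspired, Eq. (49)
weak = (C * T_P)**2 / (omega * L**2)               # weakest case, Eq. (50)

for name, val in [("rw", rw), ("holo", holo), ("weak", weak)]:
    print(f"Sigma[QG;{name}] ~ {val:.1e} Hz^-1")
print("LIGO/VIRGO noise control at ~100 Hz: ~1e-44 Hz^-1")
```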

These estimates are rather naive, but it is nonetheless interesting to compare them to the levels of noise control achieved experimentally. As mentioned, around 100 Hz both LIGO and VIRGO achieve noise control at the level of strain noise of Σ ∼ 10−44 Hz−1, so estimates like Σ[QG;weak] and Σ[QG;holo] would be safe, but the estimate Σ[QG;rw] must be excluded: it would assign more noise of quantum-spacetime origin than the total noise that LIGO and VIRGO have managed to control (which would include the hypothetical quantum-spacetime-induced noise). In spite of the crudeness of the derivations I discussed so far, this does give rather worthy input for those who fancy the random-walk picture, as I shall stress in Section 4.2.4.

Before I get to that issue, let me stress that there is a possible source of confusion in terminology (and content) in the literature. In the quantum-gravity literature there has been some discussion for several years of “holography-inspired noise” in the sense of my Eq. (49) and of Refs. [433, 430, 170]. More recently, a different mechanism for quantum-spacetime-induced noise, also labeled as “holography inspired”, was proposed in a series of papers by Hogan [287, 286, 288]. There is no relation between the two “holography-inspired” proposals for quantum-spacetime-induced interferometric noise. I do not think it is particularly important at the present time to establish which (if either) of the two proposals is more directly inspired by holography. I must instead stress that the holographic noise of Refs. [433, 430, 170] is a rather mature proposal, centered on Eq. (49) and meaningful at least as a quantum-spacetime test theory in the sense I just described. Instead, it is probably fair to describe the alternative version of holographic noise more recently proposed in Refs. [287, 286, 288] as a young proposal still looking for some maturity: it does not amount to any variant, however wild, of the description of interferometric noise I summarized here, and actually it is claimed [288] to be immune not only to the sort of interferometric noise I discussed in this Section but also to all other effects that have been typically associated with spacetime quantization in the literature. It would be a quantum-spacetime picture whose effects “can only be detected in an experiment that coherently compares transverse positions over an extended spacetime volume to extremely high precision, and with high time resolution or bandwidth” [288]. Evidently, some work is still needed on the conceptual aspects (as a rigorous theory of spacetime quantization) and on the phenomenological aspects (as a computably predictive and broadly applicable test theory of spacetime quantization) of this proposal. Only time will tell whether this present lack of maturity is due to intrinsic insurmountable limitations of the proposal or is simply a result of the fact that the proposal was made only rather recently (so there was not much time for this maturity to be reached). I should note that at some point, in spite of its lack of maturity, this proposal started to attract some pronounced interest in relation to reports by the GEO600 interferometer [550] of unexplained excess noise [373]: it had been claimed [286] that Hogan’s version of holographic noise could match exactly the anomaly that was being reported by GEO600. However, it appears that experimenters at GEO600 have recently achieved a better understanding of their noise sources, and no unexplained contribution is at this point reported (this is at least implicit in Ref. [462] and is highlighted at http://www.aei.mpg.de/hannover-en/05-research/GEO600/). The brief season of the “GEO600 anomaly” (at some point known among specialists as the “mystery noise”) is over.

4.2.4 Insight already gained and ways to go beyond it

At the present time the “state of the art” of phenomenologically-spendable descriptions of Planck-scale-induced strain noise does not go much beyond the simple-minded estimates I just described in relation to Eqs. (47), (49), and (50). But some lessons were nonetheless learned, as usually happens even with the most humble phenomenology. And these lessons do point toward some directions worthy of exploration in the future. In this section I highlight some of these lessons and possible future developments.

Among the few steps of the simple derivation I described in Section 4.2.3, much scrutiny should be directed toward the assumption \({\Sigma ^{[{\rm{QG}}]}} = \Sigma _L^{[{\rm{QG}}]}/{L^2}\): I motivated some candidate forms for \(\Sigma _L^{[{\rm{QG}}]}\) using essentially the sort of arguments that usually allow us to establish uncertainty principles for single particles, such as the ones taking as a starting point a postulated noncommutativity of single-particle coordinates; however, the strain noise Σ[QG] relevant for our interferometers is not at all a single-particle feature. Let me use the example of random-walk fuzziness to illustrate how the relationship between single-particle quantum-spacetime arguments and interferometric strain noise could be more subtle than assumed in \({\Sigma ^{[{\rm{QG}}]}} = \Sigma _L^{[{\rm{QG}}]}/{L^2}\). For this, I shall follow Ref. [57] (a similar thesis was also reported in Ref. [170]). I specialize the more general idea of random-walk quantum-spacetime fuzziness in the sense of assuming that each single photon in an interferometer experiences a random-walk path: a random Planck-length fluctuation per Planck time would affect the path of each photon of the beam. This would imply, in particular, that as a photon goes from one mirror of the interferometer to the other, over a distance L, it reaches its destination with an uncertainty corresponding to \(\sigma _L^2 \sim {L_P}T \sim {L_P}L\). However, the interferometer (and this is key to its outstanding sensitivity) does not depend on determining the position of each single photon in the beam: on the contrary, the key observable is the average position of the photons composing the beam, which may be viewed as the putative “position of the mirror” (when such a beam reaches the mirror). If L is now viewed as the distance between positions of mirrors defined in this way, rather than as the distance of propagation of an individual photon, then evidently the result is an estimate \(\sigma _L^2 \sim {L_P}T/{N_\gamma} \sim {L_P}L/{N_\gamma}\), where N γ is the (very large!) number of photons contributing to each such determination of the “position of the mirror”.

While the noise levels produced by a random-walk ansatz assuming \({\Sigma ^{[{\rm{QG}}]}} = \Sigma _L^{[{\rm{QG}}]}/{L^2}\) are, as stressed in Section 4.2.3, already ruled out by the achievements of LIGO and VIRGO, this single-particle picture of a random-walk scenario, which evidently leads us to assume

$${\Sigma ^{[{\rm{QG}}]}} = {{\Sigma _L^{[{\rm{QG}}]}} \over {{N_\gamma}{L^2}}}$$
(52)

is still safely compatible with the noise results of LIGO and VIRGO, thanks to the large N γ suppression.
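
To get a feeling for the size of this suppression, here is a hedged numerical illustration; the text does not specify N γ, so the photon number below is my own assumption (a ∼10 W laser at 1064 nm integrated over ∼10 ms):

```python
# Hedged estimate of the N_gamma suppression of Eq. (52) for the random-walk
# scenario. N_gamma is NOT given in the text; as an illustrative assumption,
# take a ~10 W laser at 1064 nm integrated over ~0.01 s, so
# N_gamma ~ P * t / E_photon per position determination.

E_PHOTON_J = 1.87e-19   # 1064 nm photon energy in joules (assumption)
N_GAMMA = 10.0 * 0.01 / E_PHOTON_J   # ~5e17 photons

SIGMA_RW = 3e-38        # unsuppressed random-walk strain noise, Eq. (51), Hz^-1
print(f"N_gamma ~ {N_GAMMA:.1e}")
print(f"suppressed Sigma[QG;rw] ~ {SIGMA_RW / N_GAMMA:.1e} Hz^-1 "
      "(vs LIGO/VIRGO control ~1e-44 Hz^-1)")
```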

This observation is not specific to the random-walk scenario. A similar N γ suppression could naturally be expected for the holographic-noise scenario of Eq. (49). As discussed in the previous Section 4.2.3, that holographic-noise scenario would be safe from LIGO/VIRGO bounds even without the N γ suppression. (In some sense, that holographic-noise scenario would then become unpleasantly “too safe from LIGO/VIRGO”, i.e., probably beyond the reach of any foreseeable interferometric experiment, if one were to take into account the plausible N γ suppression.)

Concerning the scenario for weak quantum-spacetime-induced fuzziness, the one of Eq. (50), contemplating the possibility of an N γ suppression is of mere academic interest: those noise levels are so low, even without the possible additional N γ suppression, that we should exclude their testability for the foreseeable future.

But for random-walk noise and for the holographic-noise scenario of Eq. (49) this issue of a possible N γ suppression needs to be investigated and understood. This is probably not for the LIGO/VIRGO season: LIGO and VIRGO have not found any excess noise so far, and at this point it is unlikely they ever will. But a completely new drawing board for phenomenology would materialize with the advent of LISA [282]: LISA will operate at lower observational frequencies ω than LIGO/VIRGO-type interferometers, which is important from the quantum-spacetime perspective since both random-walk noise, as described by Eq. (47), and the holographic-noise scenario of Eq. (49) predict effects that increase at lower observational frequencies. The outcome of such LISA quantum-spacetime-noise studies may then depend on issues such as the possible N γ suppression.

I should also stress that the analysis of these opportunities for quantum-spacetime phenomenology from experiments operating at low observational frequencies ω is perhaps the most significant and most robust conceptual achievement of the sort of phenomenology of spacetime foam that I am discussing in this Section. When these pictures were first proposed, it was seen by many as a total surprise that one could contemplate Planck-scale effects at observational frequencies of only 100 Hz. The naive argument goes something like “Planck-scale-induced noise must be studied at the Planck frequency”, and the Planck frequency is E p /ħ ∼ 1043 Hz. However, in analyzing actual pictures of quantum-spacetime fuzziness, even the simple-minded ones described above, one becomes familiar with well-known facts establishing (and we should expect this lesson to apply even to more sophisticated pictures of quantum-spacetime-induced fuzziness) that discrete fluctuation mechanisms tend to produce very significant effects at low observational frequencies ω, with typical behaviors of the type ω−|α|, even when their characteristic time scale is ultrashort.

4.2.5 Distance fuzziness for atom interferometers

Since the phenomenology of the implications of spacetime foam for interferometry is at an early stage of development, at the present time it may be premature to enter into detailed discussions of what type of interferometry might be best suited for uncovering quantum-spacetime/Planck-scale effects. Accordingly, in Section 4.2 I focus by default on the simplest case of interferometric studies, the one using a laser-light beam. However, in recent times atom interferometry has reached equally astonishing levels of sensitivity and for several interferometric measurements it is presently the best choice. Laser-light interferometry is still preferred for certain well-established techniques of interferometric studies of spacetime observables, as in the case of searches for gravity waves, and the observations I reported above for the phenomenology of strain noise induced by quantum-spacetime effects appear to be closely linked to the issues encountered in the search for gravity waves. However, it seems plausible that soon there will be some atom-interferometry setups that are competitive for gravity-wave searches (see, e.g., Refs. [526, 208]). This in turn might imply that searches of quantum-spacetime-induced strain noise could rely on atom interferometry.

The alternative between light and matter interferometry might prove valuable at later more mature stages of this phenomenology. It is likely that different test theories will give different indications in this respect, so that atom interferometry might provide the tightest constraints on some spacetime-foam test theories, whereas laser-light interferometry might provide the best constraints on other spacetime-foam test theories. A key aspect of the description of Planck-scale effects for atom interferometry to be addressed by the test theories (and hopefully, some day, by some fully-developed quantum-spacetime/quantum-gravity theories) is the role played by the mass of the atoms. With respect to laser-light interferometry, the case of atom interferometry challenges us with at least two more variables to be controlled at the theory level, which are the mass of the atoms and their compositeness. How do these two aspects of atom interferometry interface with the quantum-spacetime features that are of interest here? Do they effectively turn out to introduce suppressions of the relevant effects or on the contrary could they be exploited to see the effects? For none of the quantum-spacetime models that are presently studied have we reached a level of understanding of physical implications robust enough for us to answer confidently these questions. Perhaps we should also worry about (or exploit) another feature that is in principle tunable in atom interferometry, which is the velocity of the particles in the beam.

4.3 Fuzziness for waves propagating over cosmological distances

Interferometric studies of spacetime foam are another rare example of tests of quantum-spacetime effects that can be conducted in a controlled laboratory setup (also see Section 3.13). Astrophysics, however, may turn into the most powerful arena for this type of study. Indeed, the studies I discussed in the previous Section 4.2, which started toward the end of the 1990s, inspired a few years later some follow-up studies from the astrophysics side. As should be expected, the main opportunities come from observations of waves that have propagated over very large distances, thereby possibly accumulating a significant collective effect of the fuzziness encountered along the way to our detectors.

4.3.1 Time spreading of signals

An implication of distance fuzziness that one should naturally consider for waves propagating over large distances is the possibility of “time spreading” of the signal: if at the source the signal only lasted a certain very short time, but the photons that compose the signal travel a large distance L, affected by uncertainty \(\sigma _L^2\), before reaching our detectors, then the observed spread of arrival times might carry little trace of the original time spread at the source and be instead a manifestation of the quantum-gravity-induced σ L . If the distance L is affected by a quantum-spacetime uncertainty σ L , then different photons composing the signal will effectively travel distances that are not all exactly given by L, but actually differ from L and from each other by up to an amount σ L .

Again, it is of particular interest to test laws of the type discussed in the previous Section 4.2, but it appears that these effects would be unobservably small even in the case that provides the strongest effects, which is the random-walk ansatz \(\sigma _L^2 \simeq {L_P}L \simeq {L_P}T\) (assuming ultrarelativistic particles, for which L is at least roughly equal to the time duration T of the journey). To see this, let me consider once again gamma-ray bursts, which often travel for times on the order of 1017 s before reaching Earth detectors and are sometimes characterized by time structures (microbursts within the burst) that have durations as short as 10−3 s. Values of \(\sigma _L^2\) as small as \(\sigma _L^2 \sim {c^2}{10^{- 8}}\,{{\rm{s}}^2}\) could be noticeable in the analysis of such bursts. However, the estimate \(\sigma _L^2 \simeq c{L_p}T\) only provides \(\sigma _L^2\sim{c^2}{10^{- 27}}{{\rm{s}}^2}\) and is, therefore, far beyond our reach.
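
The numbers in this comparison follow from two lines of arithmetic, sketched here for concreteness:

```python
# Arithmetic behind this paragraph: a random-walk variance
# sigma_L^2 ~ c^2 t_p T for a journey of T ~ 1e17 s, compared with the
# ~c^2 1e-8 s^2 level quoted above as potentially noticeable.

T_P = 5.4e-44      # Planck time, s
T_JOURNEY = 1e17   # typical gamma-ray-burst travel time, s

sigma_t2 = T_P * T_JOURNEY   # sigma_L^2 / c^2, in s^2
print(f"random-walk estimate: sigma_L^2/c^2 ~ {sigma_t2:.1e} s^2")
print("potentially noticeable: ~1e-8 s^2  -> effect far out of reach")
```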

I shall comment in Section 4.8 on an alternative formulation of the phenomenology of quantum-spacetime-induced worldline fuzziness, the one in Ref. [490] inspired by the causal-set approach (the approach on which Section 4.8 focuses).

4.3.2 Fuzziness from nonsystematic symmetry-modification effects

As an alternative way to model spacetime fuzziness, there has been some interest [431, 72, 37] in the possibility that there might be effects resembling the ones discussed in Section 3, which are systematic deviations from the predictions of Poincaré symmetry, but are “nonsystematic” in the sense discussed at the beginning of this section. The possibility of fuzziness of particle worldlines governed by E/E p , mentioned in the previous Section 4.3.1, is an example of such nonsystematic violations of Poincaré symmetry.

These speculations are not on firm ground on the theory side, in the sense that little support for them is found among available results of actual analyses of formalizations of spacetime quantization. But it is legitimate to expect that this might be due merely to our limited abilities in mastering these complex formalisms. After all, as suggested in Ref. [431], if spacetime geometry is fuzzy, then it may be inevitable for the operative procedures that give sense to the notions of energy and momentum of a particle to also be fuzzy.

This sort of picture could have tangible observational consequences. For example, it can inspire, as suggested in Refs. [74, 57, 72], scenarios such that spacetime fuzziness effectively produces an uncertainty in the velocity of particles of order E/E p . This would give these nonsystematic effects a magnitude comparable to the one discussed in Section 3.8 for the corresponding systematic effects. After a journey of ∼ 1017 s the acquired fuzziness of arrival times could be within the reach [74] of suitably arranged gamma-ray-burst studies. However, there is no significant effort at establishing bounds following this strategy for me to report here.

There are instead some studies of this phenomenological picture [431, 72, 37], which take as a starting point the possibility, discussed in Sections 3.4 and 3.5, of modifications of the dispersion relation leading to modifications of the threshold requirements for certain particle-production processes, such as the case of two incoming photons producing an outgoing electron-positron pair. Refs. [431, 72, 37] considered the possibility of a non-systematic quantum-spacetime-induced deformation of the dispersion relation, specifically the case in which the classical relation E2 = p2 + m2 still holds on average, but for a given particle with large momentum \(\vec p\), energy would be somewhere in the range of

$$\vert \vec p\vert + {{{m^2}} \over {2\vert \vec p\vert}} - {{\vert \eta \vert} \over 2}{{{{\vec p}^2}} \over {{E_p}}} \leq E \leq \vert \vec p\vert + {{{m^2}} \over {2\vert \vec p\vert}} + {{\vert \eta \vert} \over 2}{{{{\vec p}^2}} \over {{E_p}}},$$
(53)

with some (possibly Gaussian) probability distribution. A quantum-spacetime theory with this feature should be characterized by a fundamental value of η, but each given particle would satisfy a dispersion relation of the type

$$E \simeq \vert \vec p\vert + {{{m^2}} \over {2\vert \vec p\vert}} + {{\tilde \eta} \over 2}{{{{\vec p}^2}} \over {{E_p}}},$$
(54)

with −|η| ≤ η̃ ≤ |η|.

In analyses such as the one discussed in Section 3.4 (for observations of gamma rays from blazars) one would then consider electron-positron pair production in a head-on photon-photon collision assuming that one of the photons is very hard while the other is very soft. To leading order, for the soft photon only the energy ϵ is significant (for an already small ϵ the actual value of η̃ will not matter in leading order). So, the soft photon can, in leading order, be treated as satisfying a classical dispersion relation. In a quantum-spacetime theory predicting such non-systematic effects, the hard photon would be characterized both by its energy E and by its value of η̃. In order to establish whether a collision between two such photons can produce an electron-positron pair, one should establish whether, for some admissible values of η̃₊ and η̃₋ (the values of η̃ pertaining to the outgoing positron and electron, respectively), the conditions for energy-momentum conservation can be satisfied. The process will be allowed if

$$E \geq {{{m^2}} \over \epsilon} - {{\tilde \eta} \over 4}{{{E^3}} \over {\epsilon {E_p}}} + {{{{\tilde \eta}_ +} + {{\tilde \eta}_ -}} \over {16}}{{{E^3}} \over {\epsilon {E_p}}}.$$
(55)

Since η̃, η̃₊ and η̃₋ are bound to the range from −|η| to |η|, the process can only be allowed, whatever their values, if the condition

$$E \geq {{{m^2}} \over \epsilon} - {{3\vert \eta \vert} \over 8}{{{E^3}} \over {\epsilon {E_p}}}$$
(56)

is satisfied. This condition defines the actual threshold in the non-systematic-effect scenario. Clearly, in this sense the threshold is inevitably decreased by the non-systematic effect. However, there is only a tiny chance that a given photon would have η̃ = |η|, since this is the limiting case of the range allowed by the nonsystematic effect, and unless η̃ = |η|, the process will still not be allowed even if

$$E \simeq {{{m^2}} \over \epsilon} - {{3\vert \eta \vert} \over 8}{{{E^3}} \over {\epsilon {E_p}}}.$$
(57)

Moreover, even assuming η̃ = |η|, the energy value described by (57) will only be sufficient to create an electron-positron pair with η̃₊ = −|η| and η̃₋ = −|η|, which again are isolated points at the extremes of the relevant probability distributions. Therefore, the process becomes possible at the energy level described by (57), but it remains extremely unlikely, strongly suppressed by the small probability that the values of η̃, η̃₊ and η̃₋ would satisfy the kinematical requirements.

With reasoning of this type, one can easily develop an intuition for the dependence on the energy E, for fixed value of ϵ (and treating η̃, η̃₊ and η̃₋ as totally unknown), of the likelihood that the pair-production process can occur: (i) when (56) is not satisfied the process is not allowed; (ii) as the value of E is increased above the value described by (57), pair production becomes less and less suppressed by the relevant probability distributions for η̃, η̃₊ and η̃₋, but some suppression remains up to the value of E that satisfies

$$E \simeq {{{m^2}} \over \epsilon} + {{3\vert \eta \vert} \over 8}{{{E^3}} \over {\epsilon {E_p}}};$$
(58)

(iii) finally, for energies E higher than the one described by (58), the process is kinematically allowed for all values of η̃, η̃₊ and η̃₋, and, therefore, the likelihood of the process is just the same as in the classical-spacetime theory.
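
This (i)-(iii) structure can be visualized with a small Monte Carlo, assuming for illustration |η| = 1, a 0.03 eV soft photon, and uniform distributions for η̃, η̃₊ and η̃₋ (the text notes the distributions could instead be, e.g., Gaussian):

```python
# Monte Carlo sketch of items (i)-(iii): draw eta~ for the incoming hard
# photon and eta~_+, eta~_- for the outgoing pair uniformly in [-|eta|, |eta|]
# (an assumption; Gaussian distributions are also contemplated in the text),
# and estimate the probability that the pair-production condition of Eq. (55),
# E >= m^2/eps - (eta~/4) E^3/(eps E_p) + ((eta~_+ + eta~_-)/16) E^3/(eps E_p),
# holds as the hard-photon energy E is scanned across the threshold region.

import random

M_E = 0.511e6        # electron mass, eV
E_PLANCK = 1.22e28   # Planck energy, eV
ETA = 1.0            # fundamental |eta| of the test theory (illustrative)

def allowed_fraction(E, eps, trials=20_000):
    D = E**3 / (eps * E_PLANCK)
    hits = 0
    for _ in range(trials):
        t, tp, tm = (random.uniform(-ETA, ETA) for _ in range(3))
        if E >= M_E**2 / eps - (t / 4) * D + ((tp + tm) / 16) * D:
            hits += 1
    return hits / trials

eps = 0.03  # soft-photon energy, eV
for E in (7e12, 8e12, 9e12, 1.0e13, 1.1e13):
    print(f"E = {E:.1e} eV : P(pair production allowed) ~ {allowed_fraction(E, eps):.3f}")
```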

This describes a single photon-photon collision taking into account the nonsystematic effects. One should next consider that for a hard photon traveling toward our Earth detectors from a distant astrophysical source there are many opportunities to collide with soft photons with energy suitable for pair production to occur (the mean free path is much shorter than the distance between the source and the Earth). Thus, one expects [72, 37] that even a small probability of producing an electron-positron pair in a single collision would be sufficient to lead to the disappearance of the hard photon before reaching our detectors. The probability is small in a single collision with a soft background photon, but the fact that there are, during the long journey, many such pair-production opportunities renders it likely that in one of the many collisions the hard photon would indeed disappear into an electron-positron pair. Therefore, for this specific scheme of non-systematic effects it appears that a characteristic prediction is that the detection of such hard photons from distant astrophysical sources should start being significantly suppressed already at the energy level described by (57), which is below the threshold corresponding to the classical-spacetime kinematics.

It is interesting [57, 72, 74] to contemplate in this case the possibility that systematic and nonsystematic effects may both be present. It is not unnatural to arrange the framework in such a way that the systematic effects tend to give higher values of the threshold energy, but then the nonsystematic effects would allow (with however small probability) configurations below threshold to produce the electron-positron pair. And for very large propagation distances (very many “target soft photons” available) the nonsystematic effect can essentially erase [72] the systematic effect (no noticeable upward shift of the threshold).

I illustrated the implications of nonsystematic effects within a given scenario and specifically for the case of observations of gamma rays from blazars. One can implement the non-systematic effects in some alternative ways and the study of the observational implications can consider other contexts. In this respect I should bring to the attention of my readers the studies of non-systematic effects for ultra-high-energy cosmic rays reported in Refs. [37, 310, 106].

Combinations of systematic and nonsystematic effects can also be relevant [57, 74] for studies of the correlations between times of arrival and energy of simultaneously-emitted particles. For that type of study both the systematic and the nonsystematic effects could leave an observable trace [74] in the data, codified in the mean arrival time and the standard deviation of arrival times found in different energy channels.

4.3.3 Blurring images of distant sources

The two examples of studies in astrophysics of quantum-spacetime-induced distance fuzziness I discuss in Sections 4.3.1 and 4.3.2 have only been moderately popular. I have left as last the most intensely studied opportunity in astrophysics for quantum-spacetime-induced distance fuzziness. These are studies essentially looking for effects blurring the images of distant sources.

It is interesting that these studies were started by Ref. [367], which cleverly combined some aspects of Refs. [51, 54, 53, 433], providing the main concepts for the proposal summarized in Section 4.2, and some aspects of Ref. [66], providing the main concepts for the proposal summarized in Section 3.8. Ref. [367] was interested in the same phenomenology of distance fuzziness introduced and analyzed for controlled interferometers in Refs. [51, 54, 53, 433], but looked for opportunities to perform analogous tests using the whole Universe as laboratory, in the sense first introduced in Ref. [66].

Gradually over the last decade this became a rather active research area, as illustrated by the studies reported in Refs. [367, 434, 189, 464, 363, 171, 167, 402, 514, 403, 404, 520, 456, 405].

The phenomenological idea is powerfully simple: effects of quantum-spacetime-induced spacetime fuzziness had been shown [51, 54, 53, 433] to be potentially relevant for LIGO/VIRGO-like (and LISA-like) interferometers, exploiting not only the distance-monitoring accuracy of those interferometers, but also the fact that such accuracy on distance monitoring is achieved for rather large terrestrial distances. Essentially the Universe gives us much larger distances to monitor [66], and although we can monitor them with accuracy inferior to that of a LIGO/VIRGO-like interferometer, on balance the astrophysics route may also be advantageous for studies of quantum-spacetime-induced spacetime fuzziness [367].

As for Refs. [51, 54, 53, 433], reviewed in Section 4.2, the core intuition here is that the quantum-spacetime contribution to the fuzziness of a particle’s worldline might grow with propagation distance. And collecting the scenarios summarized in Eqs. (47), (49), and (50), one arrives at a one-parameter family of phenomenological ansätze for the characterization of this dependence of fuzziness on distance

$$\sigma _L^2 \simeq L_P^{2\alpha}{L^{2 - 2\alpha}},$$
(59)

with 1/2 ≤ α ≤ 1.

An assumption shared by most explorations [367, 434, 189, 464, 363, 171, 167, 402, 514, 403, 404, 520, 456, 405] of this phenomenological avenue is that from Eq. (59) there would also follow an associated uncertainty in the specification of momenta

$$\delta E\sim L_P^\alpha \;{E^{1 + \alpha}},\quad \;\;\delta p\sim L_P^\alpha \;{p^{1 + \alpha}}.$$
(60)

I must stress that this (however plausible) deduction from the heuristic arguments has not been confirmed in any explicit model of spacetime quantization. And it plays a crucial role in most astrophysics tests of distance fuzziness: from Eq. (60) it is easy to see [367] that (assuming a classical-wave description is still admissible when such effects are non-negligible) there should be a mismatch between the uncertainty in the group velocity and the uncertainty in the phase velocity of a classical wave, and this in turn proves to be a very powerful tool for the phenomenology. During a propagation time \(T = L/{\upsilon _g}\) (\({\upsilon _g}\) being the group velocity) the phase of a wave advances by \(\Delta \phi = 2\pi ({\upsilon _p}/{\upsilon _g})(L/\lambda)\) (where \({\upsilon _p}\) is the phase velocity and λ is the wavelength). There are two schools of intuition concerning how quantitatively spacetime fuzziness should scramble the phase of a wave. According to Ref. [367] and followers the effect should go as

$$\delta \phi \simeq {{{E^\alpha}} \over {E_p^\alpha}}{L \over \lambda} = {{{E^{1 + \alpha}}} \over {E_p^\alpha}}L,$$
(61)

whereas according to Ref. [434, 171] and followers, the effect should grow more slowly with the distance of propagation, going like

$$\delta \phi \simeq {E \over {E_p^\alpha}}{L^{1 - \alpha}}.$$
(62)
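
To appreciate how differently the two intuitions behave, here is a small numerical comparison (my own sketch, in natural units, with assumed fiducial values: an optical photon, E = 1 eV, propagating over L = 1 Gpc) of the estimates (61) and (62):

```python
# Sketch comparing the phase-scrambling estimates of Eq. (61) and Eq. (62)
# for an optical photon from a cosmological source (natural units, hbar=c=1).
E_p = 1.22e28                 # Planck energy [eV]
E   = 1.0                     # photon energy [eV] (assumed, optical)
L   = 3.086e25 * 5.068e6      # 1 Gpc expressed in eV^-1 (1 m = 5.068e6 eV^-1)

for alpha in (0.5, 0.7, 1.0):
    dphi_61 = E**(1 + alpha) * L / E_p**alpha      # Eq. (61)
    dphi_62 = E * L**(1 - alpha) / E_p**alpha      # Eq. (62)
    print(f"alpha={alpha}: dphi_61={dphi_61:.2e} rad, dphi_62={dphi_62:.2e} rad")
```

For the scaling of Eq. (61) even α = 1 gives phase scrambling of many radians over cosmological distances (which is the origin of the claimed sensitivity up to values of α close to 1), whereas for Eq. (62) the effect is sizable only for α close to 1/2.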

As first observed in Ref. [514], the alternative formulas (61) and (62) should be improved to account for redshift. For the case of Eq. (62) Ref. [514] proposes the following

$$\delta \phi \sim {E \over {E_p^\alpha}}{{1 - \alpha} \over {{H_0}{q_0}}}\int\nolimits_0^\infty {dz(1 + z){L^{- \alpha}}\left({1 - {{1 - {q_0}} \over {\sqrt {1 + 2z{q_0}}}}} \right)},$$
(63)

where q0 is the deceleration parameter, \({q_0} = {\Omega _0}/2 - \Lambda/(3H_0^2)\), and L is the luminosity distance, \(L = [z{q_0} + (1 - {q_0})(1 - \sqrt {1 + 2z{q_0}})]/({H_0}q_0^2)\) (Λ, H0 and Ω0 here denote, as usual, respectively the cosmological constant, the Hubble parameter and the matter fraction).

Evidently, this phenomenology still has a few too many quantitative details subject to further scrutiny and a few too many alternative scenarios. This is the result of the fact that work on actual formalizations of spacetime quantization, while encouraging the general intuitive picture, has been unable to provide detailed guidance. And the heuristic arguments based on these preliminary studies have been unable to narrow the range of possibilities. But pursuing this path further appears to be an exciting opportunity for quantum-spacetime phenomenology, and we should, therefore, persevere. In particular, based on the (however alternative) estimates given by Eqs. (61), (62), and (63), several authors (see, e.g., Refs. [171, 514, 520]) have concluded that a phenomenology based on blurring of the images of distant sources can provide Planck-scale sensitivity for a rather broad range of possible phenomenological test theories and for values of α significantly greater than 1/2, possibly [520] going all the way up to values of α close to 1. In Ref. [514] one even finds a preliminary data analysis suggesting that for observations of quasars there might be a trend towards lower observed Strehl ratios with increasing redshift, which would provide encouragement for the hope of discovering quantum-spacetime-induced image blurring.

The main opportunities appear to be provided by observations of distant quasars [171, 514, 520], whose angular sizes are small and which are rather abundantly observed at high redshift.

4.4 Planck-scale modifications of CPT symmetry and neutral-meson studies

Investigations of spacetime symmetries and distance fuzziness are evidently relevant for some of the core features of the idea of spacetime quantization. My next task concerns CPT-symmetry tests, and the possibility that indirectly some scenarios for the quantization of spacetime might affect CPT symmetry.

A complication, but also an opportunity, for quantum-spacetime-motivated tests of CPT symmetry comes from the fact that CPT symmetry should be, and is, tested independently of the quantum-spacetime motivation. From this perspective the situation is somewhat analogous to that discussed earlier concerning quantum-spacetime-motivated tests of Lorentz symmetry. The quantum-spacetime literature can provide special motivation for probing CPT symmetry in certain specific ways, but there is already plenty of motivation, even without quantum-spacetime research, for testing CPT symmetry as broadly as possible [389, 439, 234].

Also in this case, the Standard Model Extension provides a much appreciated and widely adopted formalization, striking a good balance: it searches for violations of CPT symmetry (and/or, as mentioned, violations of Lorentz symmetry) within the confines of quantum field theory, while allowing both for effects that have been discussed from the quantum-spacetime perspective and for effects for which so far there is no quantum-spacetime motivation. I shall focus here on the hypothesis of quantum-spacetime-induced and Planck-scale-magnitude CPT violation effects, so I shall not review the broad subject of CPT violation within the Standard Model Extension, for which readers can find valuable reviews and perspectives in Refs. [345, 117, 180, 339, 299, 341, 346] (see also parts of Ref. [395]).

Another issue that should always be kept in mind in relation to CPT symmetry is the fact that it can be derived as a theorem for local quantum field theories with Lorentz invariance. In approaches based on local field theory, it is natural to perform combined studies of CPT and Lorentz symmetry.Footnote 31 However, the notion of spacetime quantization at the Planck scale involves some aspects of nonlocality (at least the notion of points that coincide with accuracy better than the Planck length is typically abandoned) and in most quantum-spacetime studies of the fate of CPT symmetry the expectation is that these aspects of non-locality may be primarily responsible for the conjectured violations of CPT symmetry.

I shall not attempt to summarize here the results on violations of CPT symmetry arising from spacetime quantization not introduced at the Planck scale (but rather at some much lower scale), for which readers can find valuable starting points to the related literature in Refs. [162, 43, 417, 496, 28] and references therein.

Consistent with the scope of this review, I shall focus exclusively on scenarios for violations of CPT symmetry based on nonclassicality (“quantization”) of spacetime introduced at the Planck scale. As a result of some technical challenges, mentioned in Section 2.2.2, this literature can only rely on preliminary theory results, but does suggest convincingly that Planck-scale sensitivity to quantum-spacetime-induced violations of CPT symmetry is within our reach.

4.4.1 Broken-CPT effects from Liouville strings

In the case of the test of CPT symmetry it is easier for me to start by discussing the availability of Planck-scale sensitivity, postponing briefly some comments on test theories based on the idea of spacetime quantization.

There is a sizable literature establishing that CPT symmetry can be tested with Planck-scale sensitivity in the neutral-kaon and the neutral-B systems (see, e.g., Refs. [219, 220, 298, 108]). It turns out that in these neutral-meson systems there are plenty of opportunities for Planck-scale departures from CPT symmetry to be amplified. In particular, the neutral-kaon system hosts the peculiarly small mass difference between long-lived and short-lived kaons, \(\vert {M_L} - {M_S}\vert/{M_{L,S}} \sim 7 \cdot {10^{- 15}}\), and other small numbers naturally show up in the analysis of the system, such as the ratio \(\vert {\Gamma _L} - {\Gamma _S}\vert/{M_{L,S}} \sim 1.4 \cdot {10^{- 14}}\). And for certain types of departures from CPT symmetry the inverse of one of these small numbers amplifies the small CPT-violation effect [219, 220, 298, 108]. In particular, this mechanism turns out to provide sufficient amplification for Planck-scale effects, inducing a difference of order \(M_{{K^0}}^2/{E_p}\) between the terms on the diagonal of the \({K^0},{\bar K^0}\) mass matrix (exact classical CPT symmetry would require the terms on the diagonal to be identical). It should be noticed that \({M_{{K^0}}}/{E_p} \sim {10^{- 19}}\), which is not overwhelmingly smaller than \(\vert {M_L} - {M_S}\vert/{M_{L,S}} \sim 7 \cdot {10^{- 15}}\).
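
As a quick order-of-magnitude check of these scales (my own back-of-the-envelope sketch, using only the numbers quoted above):

```python
# Back-of-the-envelope check of the amplification argument for neutral kaons.
M_K = 0.4976    # neutral-kaon mass [GeV]
E_p = 1.22e19   # Planck energy [GeV]

print(f"M_K / E_p         = {M_K / E_p:.1e}")      # ~4e-20, of order 10^-19
print(f"M_K^2 / E_p [GeV] = {M_K**2 / E_p:.1e}")   # Planckian diagonal mismatch
print(f"|M_L - M_S| [GeV] = {7e-15 * M_K:.1e}")    # from the quoted ratio
```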

A much studied quantum-spacetime description of violations of CPT symmetry is centered on the mentioned Liouville-strings approach [221, 220], particularly with its description of spacetime foam and its non-classical description of time, involving a non-trivial role for the Liouville field [224]. This model is, in particular, the reference for the analysis of Planck-scale limits on quantum-spacetime-induced CPT violation reported by the CPLEAR collaboration on the basis of studies of neutral kaons [13] (also see the related results reported using neutral-kaon data gathered at the particle-physics laboratory in Frascati [13, 534, 205]). Interestingly, the Liouville-string model hosts both departures from CPT symmetry and decoherence, and I find it most convenient to discuss it in the later part of this section devoted to decoherence studies.

Let me highlight a recent development that is in part inspired by these Liouville-string studies. It was recently observed (primarily in Refs. [112, 113]) that quantum-spacetime scenarios with violations of CPT symmetry might also require some corresponding modifications of the recipe for obtaining multiparticle states from single-particle states for identical particles. This may apply in particular to the neutral-kaon \({K_0} - {\bar K_0}\) system, since standard CPT transformations take \({K_0}\) into \({\bar K_0}\), but violations of CPT symmetry are likely to also induce a modification of the link between \({K_0}\) and \({\bar K_0}\).

Refs. [112, 113] proposed a phenomenology inspired by this argument and based on the following parametrization of the state |i > initially produced by a ϕ-meson decay:

$$\vert i > \propto ((\vert {K_S}(p),{K_L}(- p) > - \vert {K_L}(p),{K_S}(- p) >) + \omega (\vert {K_S}(p),{K_S}(- p) > - \vert {K_L}(p),{K_L}(- p) >)),$$
(64)

where the complex parameter ω essentially characterizes the level of contamination of the state |i > by the (otherwise unexpected) C-even component |K S (p), K S (− p) > − |K L (p), K L (− p) >.

Stringent constraints on ω can be placed by performing measurements of the chain of processes ϕ → KKXY, in which first the ϕ meson decays into a pair of neutral kaons and then one of the kaons decays at time t1 into a final state X, while the other kaon decays at time t2 into a final state Y. By following this strategy the KLOE experiment [534, 522] at DAΦNE is setting [204, 205] experimental limits on ω at the level 10−3 (|Re(ω)| < 10−3, |Im(ω)| < 10−3).

It is not easy at present to establish robustly what level of sensitivity to ω could really amount to Planck-scale sensitivity, but it is noteworthy that there are semi-quantitative/semi-heuristic estimates based on a certain intuition for spacetime foam suggesting [112, 113, 398] that sensitivities in the neighborhood of ω ∼ 10−3, ω ∼ 10−4 could already be significant.

4.4.2 Departures from classical CPT symmetry from spacetime noncommutativity at the Planck scale

Another formalism for spacetime quantization at the Planck scale where violations of CPT symmetry have been discussed to some extent is “κ-Minkowski spacetime noncommutativity” [391, 374, 70]. A first hint that this might be appropriate comes from the fact that the κ-Minkowski formalism is one of those providing support for the possibility of modifications of the dispersion relation of the form \({m^2} \simeq {E^2} - {\bar p^2} + \lambda E{\bar p^2}/2\), with λ on the order of the Planck length. It may be relevant for the relation between particles and antiparticles (for which CPT symmetry is a crucial player) that for the values of E allowed by the dispersion relation for given \(\vert \vec p\vert\) one does not recover the ordinary result (with its traditional two solutions of equal magnitude and opposite sign); instead, one finds that the two solutions \({E_ +}\), \({E_ -}\) are given by

$${E_ \pm} \simeq - {\lambda \over 2}{\vec p^2} \pm \sqrt {{m^2} + {{\vec p}^2}}.$$
(65)

The fact that the solutions \({E_ +}\) and \({E_ -}\) are not exactly opposite may suggest that one should make room for a mismatch δM of the terms on the diagonal of the \({K^0},{\bar K^0}\) mass matrix, of order

$$\vert \delta M\vert \;\sim {{\vert {E_ +}\vert - \vert {E_ -}\vert} \over {\vert {E_ +}\vert + \vert {E_ -}\vert}}2M \simeq \lambda {{{{\vec p}^2}M} \over {\sqrt {{M^2} + {{\vec p}^2}}}}.$$
(66)

The most significant feature of this description of δM is its momentum dependence, and, for given |λ|, |δM| is an increasing function of \(\vert \vec p\vert\), quadratic in the non-relativistic limit and linear in the ultra-relativistic limit. Therefore, among experiments achieving comparable δM sensitivity the ones studying more energetic kaons are going to lead to more stringent bounds on λ.

Considering that, as mentioned, neutral-kaon experiments at Φ factories are now sensitive at the level δM ∼ 10−18 GeV, one infers a sensitivity to this type of candidate quantum-gravity effect that, for kaons with momenta of about 110 MeV (at the ϕ resonance), corresponds to a sensitivity to values of |λ| around 10−32 m, i.e., not far (just 3 orders of magnitude away) from the Planck scale. Because of the premium this scenario puts on high momenta, better limits could be set using experiments with high-momentum kaons, such as Fermilab’s E731 [554, 450]. And studies with neutral B mesons of relatively high momenta could also be valuable from this perspective.
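
The order of magnitude quoted here is easy to reproduce; the following sketch (mine, using the sensitivity and momentum values quoted in the text) simply inverts Eq. (66):

```python
# Sketch inverting Eq. (66): the bound on lambda implied by a sensitivity
# delta_M ~ 1e-18 GeV for kaons of momentum ~110 MeV (values from the text).
import math

hbar_c  = 1.973e-16   # GeV*m, to convert GeV^-1 into meters
M_K     = 0.4976      # neutral-kaon mass [GeV]
p       = 0.110       # kaon momentum at the phi resonance [GeV]
delta_M = 1e-18       # sensitivity on the diagonal mismatch [GeV]

lam = delta_M * math.sqrt(M_K**2 + p**2) / (p**2 * M_K)   # [GeV^-1], Eq. (66)
print(f"|lambda| < {lam * hbar_c:.1e} m")                 # ~2e-32 m
```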

However, we are at a very early stage of understanding of the fate of CPT symmetry in these spacetimes with quantization at the Planck scale. Specifically, for the case of κ-Minkowski space-time, analyses such as the one in Ref. [70] suggest that CPT symmetry is deformed rather than broken/lost. Indeed, in κ-Minkowski the anomalies one can presently preliminarily see for CPT symmetry are all linked to the peculiarity of P-parity transformations. It appears that in κ-Minkowski P-parity transformations for momenta should not take a momentum \(\vec p\) into \(- \vec p\), but rather \(\vec p \rightarrow \ominus \vec p\), where \(\ominus \vec p\) denotes the “antipode operation”: \(\ominus \vec p \equiv - \vec p{e^{- \lambda {p_0}}}\) (where λ denotes again the κ-Minkowski noncommutativity length scale).

4.5 Decoherence studies with kaons and atoms

4.5.1 Spacetime foam as decoherence effects and the “α, β, γ test theory”

As stressed earlier in this review, the idea of “spacetime foam” appears to appeal to everyone involved in quantum-spacetime research, but this is in part due to the fact that the idea is not really well defined, certainly not by the qualitative intuitive picture proposed by Wheeler. In order to set up a phenomenology for effects induced by this spacetime foam, it is necessary to provide physical/experimentally-meaningful characterizations of it. I already discussed one possible such characterization, given in terms of distance fuzziness and associated strain noise for interferometry. Another attempt to physically characterize spacetime foam can be found in Refs. [220, 221] (other valuable perspectives on this subject can be found in Refs. [108, 251]), focusing on the possibility that the rich dynamical properties of spacetime foam might act as a decoherence-inducing environment.

The main focus of Refs. [220, 221] has been the neutral-kaon system, whose remarkably delicate balance of scales provides opportunities not only for very sensitive tests of CPT symmetry, but also for very sensitive tests of decoherence. Refs. [220, 221] essentially propose a test theory, based on the mentioned Liouville-strings idea, for spacetime-foam-induced decoherence in the neutral-kaon system. This test theory adopts the formalism of density matrices and is centered on the following evolution equation for the neutral-kaon reduced density matrix ρ:

$${\partial _t}\rho = i[\rho, H] + \delta H\;\rho,$$
(67)

where H is an ordinary-quantum-mechanics Hamiltonian and \(\delta {H_{mn}}\) (with indices m, n running from 1 to 4) is the spacetime-foam-induced decoherence matrix, taken to be such that \(\delta {H_{1n}} = \delta {H_{2n}} = \delta {H_{n1}} = \delta {H_{n2}} = 0\), while \(\delta {H_{34}} = \delta {H_{43}} = - 2\beta\), \(\delta {H_{33}} = - 2\alpha\), and \(\delta {H_{44}} = - 2\gamma\). Therefore, the test theory is fully specified upon fixing H and giving definite values to the parameters α, β, γ.
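
For concreteness, here is a minimal sketch (my own illustration, not part of the original proposal) of the decoherence matrix just described; the numerical values of α, β, γ are placeholders at the level of the bounds quoted below:

```python
# Minimal sketch of the alpha-beta-gamma decoherence matrix described above
# (indices m,n = 1..4 in the text; 0-indexed here).  Values are placeholders.
import numpy as np

alpha, beta, gamma = 1e-17, 1e-19, 1e-21   # [GeV] (placeholder magnitudes)

dH = np.zeros((4, 4))
dH[2, 2] = -2 * alpha                 # delta H_33 = -2 alpha
dH[2, 3] = dH[3, 2] = -2 * beta       # delta H_34 = delta H_43 = -2 beta
dH[3, 3] = -2 * gamma                 # delta H_44 = -2 gamma
# the first two rows and columns vanish by construction (delta H_1n = ... = 0)
print(dH)
```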

It should be stressed that this test theory necessarily violates CPT symmetry whenever δH ≠ 0. Additional CPT violating features may be introduced in the ordinary-quantum-mechanics Hamiltonian H, by allowing for differences in masses and/or differences in widths between particles and antiparticles. Therefore, this test theory is an example of a framework that could be used in a phenomenology looking simultaneously for departures from CPT symmetry of types admissible within ordinary quantum mechanics and for departures from CPT symmetry that require going beyond quantum mechanics (by allowing for decoherence). It is noteworthy that the two types of CPT violation (within and beyond quantum mechanics) can be distinguished experimentally.

Concerning more directly decoherence, various characterizations of the effects of this test theory have been provided; in particular, a valuable description of how significant the decoherence effects are (depending on the values given to α, β, γ) is found by looking at how the rate of kaon decay into a pair of pions, \({R_{2\pi}}\), evolves as a function of time. This time evolution will in general take the form

$${R_{2\pi}}(t) = {C_S}{e^{- {\Gamma _S}t}} + {C_L}{e^{- {\Gamma _L}t}} + 2{C_I}{e^{- ({\Gamma _L} + {\Gamma _S})t/2}}\cos [({m_L} - {m_S})t - \phi ],$$
(68)

where the indices S, L, I stand respectively for short-lived, long-lived, interference, and the combination \(\varsigma \equiv 1 - {C_I}/{\sqrt {{C_S}C_L}}\) provides a good phenomenological characterization of the amount of decoherence induced in the system [398].

Using data gathered by the CPLEAR experiment [13], one can set bounds on α, β, γ at the levels α ∼ 10−17 GeV, β ∼ 10−19 GeV, and γ ∼ 10−21 GeV. A comparable limit on γ has been placed by DAΦNE’s KLOE experiment, and in that case the analysis was based [398, 534, 205] on entangled kaon states.

I should stress that this is clearly a quantum-spacetime picture (at least in as much as it models spacetime foam) and the objective of the associated research program is to introduce quantum/foamy properties of spacetime at the Planck scale, but it is at present still unclear which levels of sensitivity to α, β, γ would correspond to foaminess of spacetime at the Planck scale. We are still unable to perform a derivation starting from foaminess at the Planck scale and deriving corresponding values for α, β, γ. It is nonetheless encouraging that the present experimental limits on these (dimensionful) parameters are in a neighborhood of the Planck-scale-inspired quantification \({m_K}/{E_p} \sim {10^{- 19}}\) (but it should be noticed that as much “Planck-scale inspiration” should be attributed, for example, to the scale \(m_K^2/E_p^2 \sim {10^{- 38}}\)).

4.5.2 Other descriptions of foam-induced decoherence for matter interferometry

Another attempt to characterize spacetime foam as a decoherence-inducing medium was developed by Percival and collaborators (see, e.g., Refs. [452, 453, 454]). This approach assumes that ordinary quantum systems should all be treated as open systems, as a result of neglecting the degrees of freedom of the spacetime foam, but, rather than a formalization using density matrices, Refs. [452, 453, 454] adopt a formalism in which an open quantum system is represented by a pure state diffusing in Hilbert space. The dynamics of such states is formulated in terms of “primary state diffusion”, an alternative to quantum theory with only one free parameter, a time scale τ0, which one can set to be the Planck time \({L_p}/c\).

One way to characterize τ0 is through a formula for the proper time interval for a timelike segment, which is given by [454]

$$\Delta s \simeq \vert \Delta \xi (x){\vert ^2} + \Delta \xi (x)\sqrt {{\tau _0}},$$
(69)

where Δξ(x) are point-dependent fluctuations induced by the foaminess/quantization of spacetime, which are modelled within the proposed theory.

A key characteristic of this picture would be [454] a suppression of the interference pattern for interferometers using beams of massive particles (such that the original beam is first split and then reunited to seek an interference pattern). The suppression increases with the mass of the particles, so it could more easily be tested with atom interferometers (rather than neutron interferometers). Unfortunately, a realistic analysis of an interferometer in the relevant primary-state-diffusion formalism is much beyond the level of answers one is (at least presently) able to extract from the primary-state-diffusion setup. Ref. [454] considered resorting to some simple-minded simplifications, including the assumption that the Hamiltonian be given by the mass together with projectors onto the wave packets in the arms of the interferometer, neglecting the kinetic-energy terms. Within such simplifications one does find that values of τ0 at or even a few orders of magnitude below the Planck time would leave an observably large trace in modern atom interferometers. However, these simplifications amount to a model of the interferometer that is much too crude (as acknowledged by the authors themselves [454]) and this does not allow us to meaningfully explore the possibility of genuine Planck-scale sensitivities being achieved by this strategy. Note that by taking τ0 as the Planck time it is not obvious that the effects are being introduced genuinely at the Planck scale, since the nature of the effects is characterized not only by τ0 but also by other aspects of the framework, such as the description of the fluctuations. Moreover, even if all other aspects of the picture were understood, the crudity of the model used for matter interferometers would still not allow us to investigate the Planck-scale-sensitivity issue.

Recently, Refs. [498] and [541] presented somewhat different pictures of quantum-gravity-induced decoherence for atom interferometers. Several aspects of the Percival setup are maintained, but different interpretations are applied in some aspects of the analysis. For example, Ref. [541] removes some of the assumptions adopted by Percival and collaborators, particularly in relation to the description of the “quantum fluctuations” of the metric, and proposes an estimate of the amount of suppression of the interference pattern,Footnote 32 which is perhaps more intriguing from a phenomenology perspective, since it would suggest that the effect is just beyond present sensitivities (but within the reach of sensitivities achievable by atom interferometers in the not-so-distant future). For these recent proposals one is still (for reasons analogous to those just discussed for the Percival approach) unable to meaningfully explore the issue of “genuine Planck-scale sensitivity”, but they may represent a step in the direction of a more detailed description of spacetime foam, if intended as fluctuations of the metric.

4.6 Decoherence and neutrino oscillations

The observations briefly discussed in the previous Section 4.5 that are relevant for the study of manifestations of foam-induced decoherence in some laboratory experiments (neutral-meson studies, atom interferometers) can very naturally be applied to neutrino astrophysics as well, as discussed in Ref. [400] and references therein (see also Refs. [109, 23, 422, 241]). Also in the neutrino context it is natural to attempt to develop test theories codifying the intuition that spacetime foam may act as an environment, so that neutrino observations would have to be analyzed considering the relevant neutrino system as an open system. And the evolution of the neutrino density matrix could be described (in the same sense as the description in Eq. (67) for neutral-meson systems) by an evolution equation of the type

$${\partial _t}\rho = i[\rho, H] + \delta H\;\rho.$$
(70)

It is argued in Ref. [400] that such a formalization of the effects of spacetime foam should generate a contribution to the mass difference between different neutrinos, and could give rise to neutrino oscillations constituting a “gravitational MSW effect”.

As an alternative to the setup of Eq. (70) one could consider [400, 401] the possibility of random (Gaussian) fluctuations of the background spacetime metric over which the neutrinos propagate. For the random metric one can take [400, 401] a formalization of the type

$${g^{\mu \nu}} = \left({\begin{array}{*{20}c} {- {{({a_1} + 1)}^2} + a_2^2} & {- {a_3}({a_1} + 1) + {a_2}({a_4} + 1)} \\ {- {a_3}({a_1} + 1) + {a_2}({a_4} + 1)} & {- a_3^2 + {{({a_4} + 1)}^2}} \\ \end{array}} \right)$$
(71)

and enforce [400, 401] for the random Gaussian variables \({a_i}\) a parametrization based on parameters \({\sigma _i}\) (one per \({a_i}\)) such that \(\langle {a_i}\rangle = 0\) and \(\langle {a_i}{a_j}\rangle = {\delta _{ij}}{\sigma _i}\). These fluctuations of the metric are found [400, 401] to induce decoherence even when the neutrinos are assumed to evolve according to a standard Hamiltonian setup,

$${\partial _t}\rho = i[\rho, H].$$
(72)

But the decoherence effects generated in this framework, with standard Hamiltonian evolution in a nonstandard (randomly-fluctuating) metric, are significantly different from the ones generated with the nonstandard evolution equation (70) in a standard classical metric. In particular, in both cases one obtains neutrino-transition probabilities with decoherence-induced exponential damping factors in front of the oscillatory terms, but in the framework with evolution equation (70) the scaling with the oscillation length (time) is naturally linear [400, 401], whereas when adopting standard Hamiltonian evolution in a fluctuating metric it is natural [400, 401] to have quadratic scaling with the oscillation length (time).
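
The qualitative difference between the two scalings can be made visible with a toy two-flavor transition probability (my own sketch; the mixing parameters and damping constants are arbitrary placeholders, chosen only to make the two behaviors comparable):

```python
# Toy two-flavor transition probability with a decoherence damping factor
# multiplying the oscillatory term: linear (exp[-k L]) vs. quadratic
# (exp[-k L^2]) scaling with the oscillation length, as discussed above.
import numpy as np

theta, dm2, E = np.pi / 8, 2.5e-3, 1.0   # mixing, [eV^2], [GeV] (placeholders)
L = np.linspace(100.0, 2000.0, 5)        # baseline [km]
phase = 2.54 * dm2 * L / E               # Delta m^2 L / (2E) in these units

for label, D in (("linear damping   ", np.exp(-1e-3 * L)),
                 ("quadratic damping", np.exp(-1e-6 * L**2))):
    P = 0.5 * np.sin(2 * theta)**2 * (1.0 - D * np.cos(phase))
    print(label, np.round(P, 3))
```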

The growing evidence for ordinary-physics neutrino oscillations, which one expects to be much more significant than the foam-induced ones, provides a formidable challenge for the phenomenology based on these test theories for foam-induced decoherence in the neutrino sector. Some preliminary ideas on how to overcome this difficulty are described in Ref. [400]. From the strict quantum-spacetime-phenomenology perspective of requiring one to establish that the relevant measurements could be sensitive to effects introduced genuinely at the Planck scale, these neutrino-decoherence test theories must face challenges already discussed for a few other test theories: there is at present no rigorous/constructive derivation of the values of the parameters of these test theories from a description (be it a full quantum-spacetime theory or simply a toy model) of effects introduced genuinely at the Planck scale, so one can only express these parameters in terms of the Planck scale using some dimensional-analysis arguments.

4.7 Planck-scale violations of the Pauli Exclusion Principle

A case for Planck-scale sensitivity was recently made [97, 99] for the hypothesis of possible violations of the Pauli Exclusion Principle. This has still not been metabolized by an appreciably wide quantum-gravity community, but it certainly deserves to be highlighted briefly in this review, since the chances of it gradually gaining a strong impact on quantum-spacetime phenomenology are rather high.

As observed already a few times in this review, the spin-statistics theorem assumes a classical spacetime with ordinary locality. Therefore, it is legitimate to speculate that small departures from the implications of the spin-statistics theorem may arise in a quantum spacetime. Some earlier suggestions that this might be the case can be found, e.g., in Refs. [98, 163, 86], but the setup then was not such that one could see an emerging case for Planck-scale sensitivity.

The recent studies reported in Refs. [97, 99] investigated this issue assuming the specific form of spacetime noncommutativity given by

$$[{x_i},{x_j}] = 0,\quad \quad [{x_0},{x_i}] = i\chi {\epsilon _{kij}}{n_k}{x_j},$$
(73)

where \({n_k}\) are the components of a fixed spatial unit vector and the deformation length scale χ can be taken to be on the order of the Planck length.

It is rather easy to show that this form of noncommutativity imposes a corresponding modification of the “flip operator”, i.e., the operator that is used for symmetrization (anti-symmetrization) purposes in the commutative-spacetime case. In turn this gives rise to a deformed description of bosons and fermions. And the end result is that certain transitions that would be Pauli-forbidden in a commutative spacetime are actually allowed, although at a small rate (suppressed by the smallness of χ).

Computing these rates on the basis of Eq. (73) is at present only possible by relying on an uncomfortable number of simplifying assumptions [97, 99], but the outcome is nonetheless intriguing, since it suggests that sensitivity to values of χ on the order of the Planck length is within reach. This exploits the high sensitivity of ongoing experiments, such as Borexino [107] and VIP [105], to possible violations of the Pauli Exclusion Principle.

4.8 Phenomenology inspired by causal sets

Most of the quantum-spacetime phenomenology of this past decade has been inspired by results on spacetime noncommutativity and/or LQG. But several other approaches are getting closer to inspiring phenomenological programs. I share the view of many quantum-spacetime phenomenologists who are looking at the approach based on causal dynamical triangulations [45, 371, 46, 47, 372, 49] as a maturing opportunity for inspiring the phenomenology work. And first indications are coming from the “asymptotic safety approach” [544, 466, 212, 469, 468], on which I shall comment in relation to a tangible proposal for phenomenology later in this review. Certainly in recent years we have seen a blossoming phenomenology emerging from the causal-set program.

I place here an aside on this recent phenomenology inspired by the causal-set program, which also allows me to return, from a different perspective, to the important subject of non-systematic effects, already briefly discussed in Section 4.3.2. Indeed, because of the perspective that guides that research program, most (if not all) new effects predicted within the causal-set program will be of the non-systematic type.

Causal sets are a discretization of spacetime that allows the symmetries of GR to be preserved in the continuum approximation [131, 470, 284]. And causal sets can be used to construct simple models suitable for exploring possible manifestations of fuzziness of quantum spacetime. Moreover, the causal set proposal has recently been combined with the loop representation to formulate “causal spin foams” [392], thereby establishing a link to an already mature source of inspiration for quantum-spacetime phenomenology.

Clearly, some of the manifestations one must expect from a causal-set setup fall within the class of phenomena already briefly described in Section 4.3.1: at a coarse-grained level of analysis a causal-set background should introduce an intrinsic limitation to the accuracy of lengths and durations. Several recent works were aimed at formalizing and modeling these aspects of fuzziness for propagation [311, 312, 215]. The preliminary indications that are emerging appear to suggest that, if discreteness is indeed introduced at the Planck scale, the effects are very soft (hard to detect). Nonetheless we do already have a few examples of studies aiming for tangible predictions to be compared to actual data: for example, Ref. [490] reports a causal-set-inspired analysis of possible fuzziness of arrival times (the sort of effects already discussed in Section 4.3.1), relevant for studies conducted by gamma-ray telescopes.

An intriguing effect of random fluctuations in photon polarization can also be motivated by the causal-set framework [186]. The presently-available models of this causal-set-induced effect are to be viewed as very crude/preliminary, particularly since the present understanding of the framework is still not at the point of providing a definite model of photons propagating on a causal set background (from which one could derive the polarization-fluctuation feature). Still, this appears a very promising direction, especially since experimental information on CMB polarization is improving quickly and will keep improving in the coming years.

Presently the most tangible phenomenological plans inspired by the causal-set framework revolve around an effect [214, 458] of Lorentz-invariant diffusion in the 4-momentum of massive particles. This is an Ornstein-Uhlenbeck process, a diffusion process on the mass shell that results in a stochastic evolution in spacetime. An intuitive picture for this mechanism was given in Ref. [214], by considering a classical particle of mass m propagating on a random spacetime lattice. The particle would then be constrained to move from point to point, but the discretization is such that in order to “reach the next point” (remaining on the lattice) the particle must “swerve” slightly, also adapting its velocity υ to the swerving (also see Ref. [457] for a comparison of possible variants of the description of particle propagation in causal-set theory). The change in velocity amounts to the particle jumping to a different point on its mass shell. The net result of this swerving is that [214, 316, 396] a collection of particles initially with an energy-momentum distribution ρ(p) will diffuse in momentum space along their mass shell according to the equation

$${{\partial \rho} \over {\partial \tau}} = \mathcal{K}\nabla _\mathcal{P}^2\rho - {1 \over m}{p^\mu}{\partial _\mu}\rho,$$
(74)

where [214, 316, 396] \(\mathcal{K}\) is the diffusion constant, \(\nabla _{\mathcal P}^2\) is the Laplacian in momentum space on the mass shell of the particle, τ is the proper time, and \({\partial _\mu}\) is an ordinary spacetime derivative.

The tightest limit on \(\mathcal{K}\) is \(\mathcal{K} < {10^{- 61}}\;{\rm{GeV}}^3\), obtained [316] from limits on the amount of relic-neutrino contribution to hot dark matter. This follows from the observation that energy on the mass shell is bounded from below by the mass, so that particles close to rest, when swerving, can essentially only increase their energy.
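
A toy Monte Carlo (my own one-dimensional sketch; the proper-time step is an arbitrary placeholder) illustrates why particles near rest can only gain energy from the swerves:

```python
# Toy 1D Monte Carlo of momentum "swerves": Gaussian kicks of variance
# 2*K*dtau on the mass shell.  An ensemble starting at rest can only move
# up in energy, since E = sqrt(m^2 + p^2) >= m.
import numpy as np

rng  = np.random.default_rng(0)
m    = 0.938        # proton mass [GeV]
K    = 1e-61        # diffusion constant [GeV^3], the quoted bound
dtau = 1e40         # proper-time step [GeV^-1] (placeholder, roughly 2e8 yr)

p = np.zeros(100_000)               # ensemble initially at rest
for _ in range(100):                # 100 diffusion steps
    p += rng.normal(0.0, np.sqrt(2 * K * dtau), p.size)

E = np.sqrt(m**2 + p**2)
print(f"mean energy gain: {E.mean() - m:.2e} GeV")   # tiny but strictly positive
```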

Interestingly, these \(\mathcal{K}\)-governed effects can also be relevant [396] for some of the phenomenology already discussed in this review, concerning the threshold requirements for certain particle-physics processes. Essentially the expected implication of swerving for these threshold analyses is similar to what was already discussed in Section 4.3.2: in any given opportunity of interaction between a hard photon and a soft photon, the swerving can effectively raise or lower (from the perspective of the asymptotically-incoming states) the threshold requirements for pion production. However, it appears that the bound \(\mathcal{K} < {10^{- 61}}\;{\rm{GeV}}^3\), if applicable also to protons,Footnote 33 brings the magnitude of such effects safely beyond the reach [396] of ongoing cosmic-ray studies.

4.9 Tests of the equivalence principle

4.9.1 Aside on tests of the equivalence principle in the semiclassical-gravity limit

I am focusing in this review on tests motivated by (and on effects modeled within) proposals of spacetime quantization at the Planck scale, but concerning tests of the equivalence principle inspired by quantum-spacetime models there is some merit in making a small digression on tests of the equivalence principle in the semiclassical limit of quantum gravity (where, by construction, no quantum-spacetime effects could be seen). This will allow me to compellingly set up the issue of testing the equivalence principle from a general quantum-gravity perspective, and specifically from the perspective of spacetime quantization at the Planck scale.

As already discussed briefly in Section 1, there is a long tradition of phenomenological studies, concerning the semiclassical-gravity limit, based on a “gravity version” of the Schrödinger equation of the form

$$\left[ {- \left({{1 \over {2{M_I}}}} \right){{\vec \nabla}^2} + {M_G}\phi (\vec r)} \right]\psi (t,\vec r) = i{{\partial \psi (t,\vec r)} \over {\partial t}},$$
(75)

describing the dynamics of matter (with wave function \(\psi (t,\vec r)\), inertial mass M I and gravitational mass M G ) in an external gravitational potential \(\phi (\vec r)\). Some of the most noteworthy results obtained within this framework are the interferometric studies of the type first set up by Colella, Overhauser and Werner [177], which establish that the Earth’s gravitational field is strong enough to affect the evolution of the wave function ψ in an observably-large manner, and the more recent evidence [428] that ultracold neutrons falling towards a horizontal mirror do form gravitational quantum bound states.
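
The bound-state energies observed in Ref. [428] follow from Eq. (75) with \({M_I} = {M_G} = m\) and a linear potential above a mirror; as a cross-check, the standard Airy-function spectrum \({E_n} = {({\hbar ^2}m{g^2}/2)^{1/3}}\vert {a_n}\vert\) (a textbook quantum-mechanics result, sketched here under the idealizing assumption of a perfectly reflecting mirror) reproduces the peV scale of the observed levels:

```python
# Sketch of the gravitational quantum bound states of ultracold neutrons:
# Eq. (75) with M_I = M_G = m and V = m g z above a perfect mirror gives
# E_n = (hbar^2 m g^2 / 2)^(1/3) |a_n|, with a_n the zeros of Airy's Ai.
import numpy as np
from scipy.special import ai_zeros

hbar = 1.0546e-34   # J*s
m    = 1.675e-27    # neutron mass [kg]
g    = 9.81         # m/s^2

a_n = ai_zeros(3)[0]                                   # first three Ai zeros
E_n = (hbar**2 * m * g**2 / 2) ** (1 / 3) * np.abs(a_n)
print(E_n / 1.602e-19)   # ~[1.4e-12, 2.5e-12, 3.3e-12] eV, i.e., the peV scale
```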

Of relevance here is the fact that some of the issues that have been most extensively considered by researchers involved in these studies concern the equivalence principle. This is signaled by the adoption of separate notation for inertial and gravitational mass in Eq. (75). In principle the gravitational mass \({M_G}\) governs the accrual of gravity-induced phases, while the inertial mass \({M_I}\) intervenes in determining the ratio between wave vector and velocity vector in the Galilean limit (\(\vec p \simeq {M_I}\vec \upsilon\)). And even for \({M_G} = {M_I}\) the mass does not factor out of the free-fall evolution of the quantum state (but for \({M_G} = {M_I}\) one at least recovers [535] a complete identification between the effects of gravitation and the effects of acceleration).

Besides neutrons, these studies can also be performed with atoms [14]. And, interestingly, one can also perform rather similar analyses in studying neutrino oscillations, finding (see, e.g., Refs. [252, 272, 148] and references therein), that gravity may induce neutrino oscillations if different neutrino flavors are coupled differently to the gravitational field, thereby violating the equivalence principle.

4.9.2 On the equivalence principle in quantum spacetime

Evidently, searches for possible violations of the equivalence principle in the semiclassical-gravity limit of quantum gravity have significant intrinsic interest. And some of these tests of the equivalence principle in the semiclassical-gravity limit also find explicit motivation in approaches to the study of the full quantum-gravity problem: most notably the string-theory-inspired studies reported in Refs. [521, 195, 196, 194, 193, 192], and references therein, predict violations of the equivalence principle in the semiclassical-gravity limit.

Returning to the main subject of this review, I should stress that the idea of spacetime quantization at the Planck scale provides a particularly crisp motivation for testing the equivalence principle. The simplest way to see this comes from observing the role that absolute and ideally sharp locality plays in the formulation of the equivalence principle of classical gravity, in contrast to the large class of qualitatively very severe (though tiny) anomalies for locality that the various known scenarios for spacetime quantization (starting with spacetime noncommutativity, for example) provide. Unfortunately, our present level of mastery of the relevant formalisms often falls short of allowing us to investigate the fate of the equivalence principle. Therefore, I will briefly describe one illustrative example of a promising attempt to model how spacetime foam could affect the equivalence principle. This is the objective of recent studies, reported in Ref. [263] and references therein, in which spacetime foam is modeled in terms of small fluctuations of the metric on a given background metric.Footnote 34 The analysis of Ref. [263], which also involves an averaging procedure over a finite spacetime scale, ends up motivating the study of a modified Schrödinger equation of the form

$$\left[ {- \left({{1 \over {2m}}} \right)({\delta ^{kl}} + {{\tilde \alpha}^{kl}}){\partial _k}{\partial _l} - m\phi (\vec r)} \right]\psi (t,\vec r) = i{\partial _t}\psi (t,\vec r),$$
(76)

where the tensor \({\tilde \alpha ^{kl}}\) is a characterization of the spacetime foaminess, and it is natural to consider the tensor \({\tilde m^{kl}}\),

$${({\tilde m^{kl}})^{- 1}} \equiv {1 \over m}({\delta ^{kl}} + {\tilde \alpha ^{kl}}),$$
(77)

as an anomalous inertial mass tensor that depends on the type of particle and on the fluctuation scenario. The particle-dependent rescaling of the inertial mass provides a candidate key manifestation of foam-induced violations of the equivalence principle to be sought experimentally, in ways that are once again exemplified by the COW experiments.
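
A tiny numerical sketch (mine; the entries of \({\tilde \alpha ^{kl}}\) are arbitrary placeholders, exaggerated for numerical visibility) makes the particle-dependent, anisotropic rescaling of Eq. (77) explicit:

```python
# Sketch of Eq. (77): the foam tensor alpha~^kl turns the scalar inertial
# mass into a slightly anisotropic tensor; two species with different
# alpha~^kl would then no longer fall identically.  Entries are placeholders,
# exaggerated so the shifts survive double-precision rounding.
import numpy as np

m = 1.0
alpha_tilde = np.diag([1.0e-6, 2.0e-6, 1.5e-6])        # placeholder foam tensor

m_tensor = m * np.linalg.inv(np.eye(3) + alpha_tilde)  # invert Eq. (77)
print(np.diag(m_tensor) - m)   # three slightly different inertial-mass shifts
```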

This very recent proposal illustrates a type of path that could be followed to introduce violations of the equivalence principle originating genuinely from spacetime quantization at the Planck scale: one might find a way to describe spacetime foaminess in terms of effects of genuinely Planckian size, and then elaborate the implications of this spacetime foaminess for the equivalence principle. The formalization adopted in Ref. [263] is still too crude to allow such an explicit link between the Planck-scale picture of spacetime foam and the nature and magnitude of the effects, but it provides a significant step in that direction.

5 Infrared Quantum-Spacetime Phenomenology

Work on Planck-scale quantum-spacetime phenomenology is a rather recent development, with a significant effort taking place over only little more than a decade. But one can already make a distinction between “traditional” and “novel” quantum-spacetime phenomenology approaches. The proposals I have reviewed in the previous two Sections 3 and 4 cover the scope of the “traditional” approach, considering UV effects that could be relevant for observations in astrophysics and/or in controlled-laboratory experiments. I devote this and the next Section 6 to the “novel” idea that Planck-scale quantization of spacetime could have valuable phenomenological implications in some IR regimes and/or that the tests could rely on cosmology.

Considering that these “novel” areas of quantum-spacetime phenomenology are in a preliminary exploratory phase, I will adopt a lower standard in the selection of topics, meaning that I will even mention some proposals that have not fully established a link to a definite scheme of spacetime quantization and/or have not fully established the availability of sensitivities that could be compellingly linked with the introduction of spacetime quantization at the Planck scale. I will rather rely on an (inevitably subjective) assessment of whether the relevant proposals provide valuable first steps in the direction of establishing, in the not-so-distant future, a robust Planck-scale quantum-spacetime phenomenology.

5.1 IR quantum-spacetime effects and UV/IR mixing

In the long [508, 475], and so far inconclusive, search for quantum gravity and quantum spacetime the main strategy was inspired by the discovery paradigm of the 20th century, the “microscope paradigm”, with discovery potential measured in terms of the shortness of the distance scales probed. But recent research has raised the possibility that by quantizing spacetime at the Planck scale one might have not only some new phenomena in a far-UV regime, but also some new phenomena in a “dual” IR regime. Actually, as compellingly stressed in Ref. [176], our present understanding of black-hole thermodynamics, and particularly the scaling \(S \propto {R^2}\) of the entropy of a black hole of radius R, suggests that such effects of “UV/IR mixing” may be inevitable. It is on the basis of apparently robust hypotheses concerning the behavior of quantum gravity in the UV (Planckian) regime that we arrive at this quadratic dependence, which is surprising with respect to what one might expect in particular in quantum field theory, where cubic scaling (\(S \propto {R^3}\)) naturally arises. But this feature originating from the UV sector clearly should have its most profound implications in the large-distance/IR regime, since the difference between quadratic and cubic dependence on the radius becomes more and more significant as the radius is increased.Footnote 35

Another argument in favor of UV/IR mixing is found considering a popular intuition for quantum spacetime, which relies on the introduction of an uncertainty principle for spacetime itself (in addition to the Heisenberg one, which acts in phase space). The link with UV/IR mixing can be already seen simply by considering a principle of the form \(\delta x\delta y \geq \lambda _\ast^2\) for spatial coordinates, with λ* plausibly on the order of the Planck length. This type of uncertainty relation would evidently imply that small uncertainty in x should require large uncertainty in y, and this suggests a link between probing short distance scales (small δx) and probing large distance scales (large δy).

For this last point we have more than general arguments: computations in a noncommutative spacetime compatible with this sort of uncertainty relations, the “canonical spacetime”, with non-commutativity of coordinates governed by \([{x_\mu},{x_\nu}] = i{\theta _{\mu \nu}}\), have found explicit manifestations of UV/IR mixing. This is particularly evident when analyzing mass renormalization within the most popular formalization of quantum field theories in such canonical noncommutative spacetimes. At one loop one finds terms in mass renormalization of the form [213, 516, 397] (for a \({\Phi ^4}\) scalar field theory)

$$\Delta _{{m^2}}^{{\rm{renorm}}} = {1 \over {32}}{{{g^2}\Lambda _{{\rm{eff}}}^2} \over {{\pi ^2}}} - {1 \over {32}}{{{g^2}{m^2}} \over {{\pi ^2}}}\log {{\Lambda _{{\rm{eff}}}^2} \over {{m^2}}} + \mathcal{O}({g^4}),$$
(78)

where \({\Lambda _{{\rm{eff}}}}\) is a peculiar cutoff that can be expressed in terms of a standard UV cutoff Λ, the “noncommutativity matrix” \({\theta _{\mu \nu}}\) and the momentum \({q_\mu}\) of the particle as follows

$$\Lambda _{{\rm{eff}}}^2 = {1 \over {{\Lambda ^{- 2}} + {q_\rho}{{({\theta ^2})}^{\rho \sigma}}{q_\sigma}}}.$$
(79)

Removing the cutoff Λ (Λ → ∞) one is left with \(\Lambda _{{\rm{eff}}}^2 = 1/[{q_\rho}{({\theta ^2})^{\rho \sigma}}{q_\sigma}]\), so that

$$\Delta _{{m^2}}^{{\rm{renorm}}}\sim {{{g^2}} \over {{q_\rho}{{({\theta ^2})}^{\rho \sigma}}{q_\sigma}}} + {g^2}{m^2}\log [{m^2}{q_\rho}{({\theta ^2})^{\rho \sigma}}{q_\sigma}].$$
(80)
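
The IR side of this mixing is easy to exhibit numerically. In the following sketch (my own one-scale toy, in which the contraction \({q_\rho}{({\theta ^2})^{\rho \sigma}}{q_\sigma}\) is replaced by a single scalar combination \({(\theta q)^2}\)) the cutoff has been removed, and \(\Lambda _{{\rm{eff}}}^2\) blows up as the external momentum goes to zero:

```python
# One-scale toy of Eq. (79) after removing the UV cutoff (Lambda -> infinity):
# Lambda_eff^2 = 1/(theta*q)^2, finite at q != 0 but divergent as q -> 0.
# Here q_rho (theta^2)^{rho sigma} q_sigma is crudely replaced by (theta*q)^2.
theta = 1e-38   # noncommutativity scale [GeV^-2] (Planckian, placeholder)

for q in (1e3, 1.0, 1e-3, 1e-6):   # external momentum [GeV]
    print(f"q = {q:.0e} GeV  ->  Lambda_eff^2 = {(theta * q) ** -2:.2e} GeV^2")
```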

These power-law and logarithmic IR features are the result of the UV implications of noncommutativity, which manifest themselves in a rearrangement of the renormalization procedure [213, 516, 397]. In general, the presence of such sharp features in the IR may be of some concern, since they have not (yet) been observed. And these concerns are more serious in the cases where these features are sharpest. However, it should be noticed that different choices of the matrix \({\theta _{\mu \nu}}\) produce very different types of IR behavior, and it is well established that in the presence (at least in the UV sector) of supersymmetry only the logarithmic IR features survive (the power-law corrections are removed by one of the standard supersymmetry-induced cancellation mechanisms). The least virulent IR scenario is obtained by assuming the presence of UV supersymmetry and choosing a “light-like” noncommutativity matrix [19, 90] (\({\theta ^{\mu \nu}}{\theta _{\mu \nu}} = {\epsilon _{\mu \nu \rho \sigma}}{\theta ^{\mu \nu}}{\theta ^{\rho \sigma}} = 0\)), so that the main IR feature is a modification of the on-shell relation of the form

$${m^2} \simeq {E^2} - {p^2} + {\chi _\theta}{m^2}\log \left({{{E + \vec p \cdot {{\hat u}_\theta}} \over m}} \right).$$
(81)

The unit vector \({\hat u_\theta}\) describes a preferential direction [19] determined by the matrix \({\theta _{\mu \nu}}\), while the dimensionless parameter \({\chi _\theta}\) also allows for an expected [397] dependence of the magnitude of the effect on the specific particle under study: since the IR feature is found in the renormalization procedure, and this in turn has an obvious dependence on the interactions of a given field with other fields in the theory, the coefficient of the logarithmic IR correction has a different value for different fields.

Note that in the IR regime (small p) one can rewrite (81) as follows

$${m^2} \simeq {E^2} - {p^2} + {\chi _\theta}m\vec p \cdot {\hat u_\theta},$$
(82)

so that the effect ultimately amounts to a correction that is linear in momentum. Clearly, this is a scenario in which the IR implications of UV/IR mixing are particularly soft.

Interestingly, canonical noncommutativity is not the only quantum-spacetime proposal that can motivate the study of UV/IR mixing. This is suggested by the perspective on the semi-classical limit of LQG that provided motivation for the quantum-spacetime model of Refs. [33, 34], which also inspired the models considered in Refs. [154, 155, 69]. In this LQG-inspired scenario one finds [33, 34] modifications of the dispersion relation that are linear in momentum in the IR regime, and this has motivated a phenomenology basedFootnote 36 on the IR dispersion relation [154, 155, 69]

$${m^2} \simeq {E^2} - {p^2} + {\chi _{\hat p}}\;m\;p,$$
(83)

where \({\chi _{\hat p}}\) is a phenomenological parameterFootnote 37 analogous to \({\chi _\theta}\).

5.2 A simple model with soft UV/IR mixing and precision Lamb-shift measurements

The long-wavelength behaviors of the two scenarios for “soft UV/IR mixing” summarized here in Eq. (82) and Eq. (83) evidently differ only because invariance under spatial rotations (lost in Eq. (82)) is preserved by the scenario described in Eq. (83). Therefore, one could simultaneously consider the two scenarios, by observing that the characterization of Eq. (82) in terms of \({\chi _\theta}\) and \({\hat u_\theta}\) is applicable to the scenario of Eq. (83) by replacing \({\hat u_\theta}\) with \(\hat p \equiv \vec p/p\) and replacing \({\chi _\theta}\) with \({\chi _{\hat p}}\). However, in light of the limited scope of my review of results on “soft UV/IR mixing”, I shall be satisfied with a simplified description, assuming space-rotation invariance and limiting my focus to the effects of dispersion relations of the form

$${m^2} \simeq {E^2} - {p^2} + \xi {{{m^2}} \over {{E_p}}}p,$$
(84)

where I also introduced a change of definition of the dimensionless coefficient, rescaling it in a way that might be relevant for connecting the IR effects with the Planck scale (\(\chi \equiv \xi m/{E_p}\), which involves no loss of generality if ξ is allowed to be particle dependent).

The phenomenology of models such as this requires a complete change of strategy with respect to the phenomenology of quantum-spacetime UV effects that I discussed in the previous Sections 3 and 4 of this review. Whereas the typical search for those UV effects relied on low-precision high-energy data, for the type of IR effects that I am now considering the best options come from high-precision low-energy data. A first example of this was given in Ref. [155], most notably with a (however brief) discussion of how a dispersion relation of type (84) could be relevant for Lamb-shift measurements. Indeed, assuming Eq. (84) holds for the electron, one should have a modification of the energy levels of the hydrogen atom. And in light of the high precision of certain Lamb-shift measurements (which Ref. [155] assesses as being better than one part in \({10^5}\); see also, e.g., Refs. [328, 546]) one can use this observation to place valuable limits on parameters such as ξ (and χ) for the electron.

5.3 Soft UV/IR mixing and atom-recoil experiments

Evidently the ansatz (84) is such that if particles of different mass had the same valueFootnote 38 of ξ then the effect would be seen more easily for heavier (more massive) types of particles.

I find particularly striking the case of measurements of the recoil of cesium (and rubidium) atoms. For cesium one would assume, following Eq. (84), that

$${m^2} \simeq {E^2} - {p^2} + {\xi _{Cs}}{m^2}{p \over {{M_p}}},$$
(85)

where \({\xi _{Cs}}\) is the ξ parameter for the case of cesium atoms.

The measurement strategy we proposed in Ref. [69] for testing Eq. (85) with atoms is applicable to measurements of the “recoil frequency” of atoms with experimental setups involving one or more “two-photon Raman transitions” [548]. The strategy of the analysis is best described by setting aside initially the possibility of Planck-scale effects, and looking at the recoil of an atom in a two-photon Raman transition from the perspective adopted in Ref. [548], which provides a convenient starting point for the Planck-scale generalization that is of interest here. One can impart momentum to an atom through a process involving absorption of a photon of frequency ν and (stimulated) emission, in the opposite direction, of a photon of frequency ν′. The frequency ν is computed taking into account a resonance frequency ν* of the atom and the momentum the atom acquires, recoiling upon absorption of the photon: \(\nu \simeq {\nu _\ast} + {(h{\nu _\ast} + p)^2}/(2mh) - {p^2}/(2mh)\), where m is the mass of the atom (e.g., \({m_{Cs}} \simeq 124\;{\rm{GeV}}\) for cesium), and p is its initial momentum. The emission of the photon of frequency ν′ must be such as to de-excite the atom and impart to it additional momentum: \(\nu \prime + {(2h{\nu _\ast} + p)^2}/(2mh) \simeq {\nu _\ast} + {(h{\nu _\ast} + p)^2}/(2mh)\). Through this analysis one establishes that by measuring \(\Delta \nu \equiv \nu - \nu \prime\), in cases in which ν* and p can be accurately determined, one actually measures h/m for the atoms:

$${{\Delta \nu} \over {2{\nu _\ast}({\nu _\ast} + p/h)}} = {h \over m}.$$
(86)
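
As a cross-check, the algebra leading from the two energy-balance conditions quoted above to Eq. (86) can be verified symbolically; a minimal sketch (my own illustration):

```python
# Symbolic cross-check of Eq. (86): starting from the two energy-balance
# conditions for absorption and stimulated emission, verify that
# Delta nu / (2 nu_* (nu_* + p/h)) reduces to h/m.
import sympy as sp

h, m, p, nu_s = sp.symbols('h m p nu_star', positive=True)

# absorption:  h*nu  = h*nu_* + (h*nu_* + p)^2/(2m) - p^2/(2m)
nu = nu_s + ((h*nu_s + p)**2 - p**2) / (2*m*h)

# emission:    h*nu' + (2h*nu_* + p)^2/(2m) = h*nu_* + (h*nu_* + p)^2/(2m)
nu_prime = nu_s + ((h*nu_s + p)**2 - (2*h*nu_s + p)**2) / (2*m*h)

delta_nu = sp.simplify(nu - nu_prime)
ratio = sp.simplify(delta_nu / (2*nu_s*(nu_s + p/h)))
print(ratio)   # -> h/m
```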

This result has been confirmed experimentally with remarkable accuracy. A powerful way to illustrate this success is provided by comparing the results of atom-recoil measurements of \(\Delta \nu/[2{\nu _\ast}({\nu _\ast} + p/h)]\) and of measurements [277] of \(\alpha^2\), the square of the fine-structure constant. \(\alpha^2\) can be expressed in terms of the mass m of any given particle [548] through the Rydberg constant, \({R_\infty}\), and the mass of the electron, \({m_e}\), in the following way [548]: \({\alpha ^2} = 2{R_\infty}{m \over {{m_e}}}{h \over m}\). Therefore, according to Eq. (86) one should have

$${{\Delta \nu} \over {2{\nu _\ast}({\nu _\ast} + p/h)}} = {{{\alpha ^2}} \over {2{R_\infty}}}{{{m_e}} \over {{m_u}}}{{{m_u}} \over m},$$
(87)

where \({m_u}\) is the atomic mass unit and m is the mass of the atoms used in measuring \(\Delta \nu/[2{\nu _\ast}({\nu _\ast} + p/h)]\). The outcomes of atom-recoil measurements, such as the ones with cesium reported in Ref. [548], are consistent with Eq. (87) to an accuracy of a few parts in \(10^9\). The fact that Eq. (86) has been verified to such a high degree of accuracy proves to be very valuable, since it turns out [69] that modifications of the dispersion relation of type (85) require a modification of Eq. (86). Following Ref. [69] one easily finds

$$\Delta \nu \simeq {{2{\nu _\ast}(h{\nu _\ast} + p)} \over m} + {\xi _{Cs}}{m \over {{M_P}}}{\nu _\ast},$$
(88)

and in turn in place of Eq. (87) one has

$${{\Delta \nu} \over {2{\nu _\ast}({\nu _\ast} + p/h)}}\left[ {1 - {\xi _{{\rm{Cs}}}}\left({{m \over {2{M_P}}}} \right)\left({{m \over {h{\nu _\ast} + p}}} \right)} \right] = {{{\alpha ^2}} \over {2{R_\infty}}}{{{m_e}} \over {{m_u}}}{{{m_u}} \over m}.$$
(89)

This equation has been arranged so that on the left-hand side it is easy to recognize that the small quantum-spacetime effect in this specific context receives a sizable “amplification” by the large hierarchy of energy scales \(m/(h{\nu _\ast} + p)\), which in typical experiments of the type here of interest can be [548] of order \(\sim 10^9\).

This turns out to be just enough to provide the desired “Planck-scale sensitivity”: one easily finds that, combining the measurements on cesium reported in Ref. [548] and the determination of \(\alpha^2\) reported in Ref. [277], one can establish [69] that \({\xi _{{\rm{Cs}}}} = -1.8 \pm 2.1\).
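
The arithmetic behind this sensitivity claim can be summarized in a few lines; the sketch below uses only the orders of magnitude quoted in the text (the amplifier and the experimental accuracy are taken from the discussion above, not recomputed from the raw data of Ref. [548]):

```python
# Rough sensitivity estimate for Eq. (89), using the scales quoted in the
# text: m_Cs ~ 124 GeV, M_P ~ 1.2e28 eV, amplification m/(h*nu_* + p) ~ 1e9,
# and a few-parts-in-1e9 experimental accuracy.
m_Cs = 124e9        # cesium mass [eV]
M_P  = 1.22e28      # Planck scale [eV]
amplifier = 1e9     # hierarchy m/(h*nu_* + p), order of magnitude from [548]

correction_per_xi = (m_Cs / (2*M_P)) * amplifier
accuracy = 3e-9     # "a few parts in 1e9"

print(f"fractional correction per unit xi_Cs : {correction_per_xi:.1e}")
print(f"experimental accuracy                : {accuracy:.1e}")
print(f"implied |xi_Cs| sensitivity          : {accuracy/correction_per_xi:.1f}")
```

The correction per unit \({\xi _{{\rm{Cs}}}}\) and the experimental accuracy come out comparable, which is exactly why the resulting determination of \({\xi _{{\rm{Cs}}}}\) has order-unity uncertainty.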

It is interesting that, besides tests of IR modifications of the dispersion relation, these atom-recoil studies can also be used to investigate possible IR modifications of the law of conservation of momentum. An example of such an analysis is given in Ref. [89].

5.4 Opportunities for Bose-Einstein condensates

The use of atoms in quantum-spacetime phenomenology immediately confronts us with issues that are presently beyond the reach of available theoretical results. A legitimate expectation is that quantum-spacetime effects for atoms could be weaker than for the particles that compose atoms, as a result of the sort of “average-out effects” that are often expected in the quantum-spacetime literature. This would have to be modeled by introducing an extra suppression factor (a sort of “compositeness factor”) in addition to the Planck-scale suppression that is standard in quantum-spacetime phenomenology. Analyses not making room for such an additional suppression might overestimate the Planck-scale-sensitivity reach of the relevant experiments. On the other hand, we are at present not sure whether such compositeness-suppression factors are truly needed, or at least whether they are needed in all contexts and in all quantum-spacetime models. For example, it is not unreasonable to imagine that in appropriate quantum-spacetime models, when we achieve the ability to analyze them in detail, we might find that as long as a particle is to be handled as a quantum state (far from its classical limit) it might be irrelevant for the magnitude of quantum-spacetime effects whether the particle is composite or “fundamental”.

This issue of compositeness will surely gradually take an important role in quantum-spacetime research, but at present it is at a very preliminary stage of investigation, and I shall therefore set it aside. However, do note that if particles composed of a very large number of constituent particles experience Planck-scale effects unsuppressed by their compositeness, then not only atoms but also (and perhaps more powerfully) Bose-Einstein condensates could prove to be a very valuable opportunity for quantum-spacetime phenomenology.

And it is noteworthy that in the recent quantum-spacetime-phenomenology literature there has already been a surge of interest in the possibilities offered by Bose-Einstein condensates, as seen in Refs. [542, 472, 139, 138]. In particular, Refs. [139, 138] study Bose-Einstein condensates adopting a perspective on soft UV/IR mixing that is closely related to the one discussed for atoms in the previous Section 5.3.

5.5 Soft UV/IR mixing and the end point of tritium beta decay

Perhaps the most tempting opportunity for the phenomenology of UV/IR mixing comes from studies of the low-energy beta-decay spectrum of tritium, \(^3{\rm{H}} \rightarrow {}^3{\rm{He}} + {{\rm{e}}^ -} + {\bar \nu _e}\), which have so far produced some rather puzzling results [545, 370]. It is well understood (see, e.g., Refs. [121, 174]) that these puzzles could be addressed by introducing deformed rules of kinematics. And it is intriguing that studies conducted near the endpoint of tritium beta decay are the only known way to accurately investigate the properties of neutrinos in a non-relativistic (non-ultrarelativistic) regime, where their momenta could be comparable to their (tiny) masses. So, it would seem to be a very natural opportunity for advocating UV/IR mixing as a possible explanation. However, the evidence available so far is not very encouraging for the hope of attributing the magnitude of the reported anomalies to IR effects induced by the Planck scale. Still, it is noteworthy that specifically the simple model for soft UV/IR mixing that I described in the previous Sections 5.2 and 5.3 has just the right structure for producing the sort of anomalies that are being reported, as was first stressed in Ref. [154].

The main point of Ref. [154] is centered on the properties of the function K(E) conventionally used to characterize the Kurie plot of tritium beta decay:

$$K(E) = {\left[ {\int {d{p_\nu}p_\nu ^2\delta (Q - E - {E_\nu})}} \right]^{1/2}},$$
(90)

where Q is the difference between initial and final masses of the process, \(Q \simeq {M_{^3{\rm{H}}}} - {M_{^3{\rm{He}}}} - {m_e}\) (and, therefore, Q is the sum of the neutrino energy, \({E_\nu}\), and the kinetic energy of the electron, E).

Using standard dispersion relations one finds

$$K(E) = {\left[ {(Q - E)\sqrt {{{(Q - E)}^2} - m_\nu ^2}} \right]^{1/2}},$$
(91)

which does not fit well with the available data near the endpoint [545, 370]. It was observed in Ref. [154] that by instead using a modified dispersion relation of type (84), for negative ξ, one obtains better agreement, but this requires that \(\xi m_\nu ^2/{E_p}\) have a value of a few eV. In turn this implies a value of ξ (see footnote 39) that is extremely large with respect to the natural quantum-spacetime estimate ξ ∼ 1, and as a result the case for a quantum-spacetime interpretation is rather weak at present. Still, this exciting experimental situation deserves to be further pursued: perhaps we are modeling soft UV/IR mixing correctly but have developed the wrong intuition about the role the Planck scale should play, or perhaps one should look at alternative ways to model UV/IR mixing.
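
For readers wishing to experiment with this, the δ-function integral of Eq. (90) reduces to \(K{(E)^2} = p_\nu ^2\,(d{p_\nu}/d{E_\nu})\) evaluated at \({E_\nu} = Q - E\), which is straightforward to evaluate once the dispersion relation is inverted for \({p_\nu}({E_\nu})\). The sketch below does this for the standard case (reproducing Eq. (91)) and for the modified case; the neutrino mass and the value of \(\xi m_\nu ^2/{E_p}\) are toy inputs of my own choosing, not fits to the data of Refs. [545, 370]:

```python
# Sketch of the Kurie function K(E) of Eq. (90), evaluated by carrying out
# the delta-function integral: K(E)^2 = p_nu^2 * dp_nu/dE_nu at E_nu = Q - E.
# The neutrino momentum p(E_nu) is obtained by inverting either the standard
# dispersion relation or the modified one of Eq. (84); delta is a toy value.
import numpy as np

m_nu = 1.0            # toy neutrino mass [eV]
Q = 18.6e3            # tritium endpoint energy scale [eV]
delta_toy = -0.5      # toy value of xi*m_nu^2/E_p [eV]; NOT a fit to data

def p_of_E(E_nu, delta=0.0):
    # invert m^2 = E^2 - p^2 + delta*p  =>  p^2 - delta*p - (E^2 - m^2) = 0
    disc = delta**2 + 4.0*(E_nu**2 - m_nu**2)
    return 0.5*(delta + np.sqrt(disc))

def K(E, delta=0.0, dE=1e-4):
    E_nu = Q - E
    p = p_of_E(E_nu, delta)
    dp = (p_of_E(E_nu + dE, delta) - p_of_E(E_nu - dE, delta)) / (2*dE)
    return np.sqrt(p**2 * dp)

for E in (Q - 20.0, Q - 5.0, Q - 2.0):
    print(f"E = Q-{Q-E:4.1f} eV:  K_std = {K(E):7.3f}   "
          f"K_mod = {K(E, delta_toy):7.3f}")
```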

5.6 Non-Keplerian rotation curves from quantum-gravity effects

In addition to precision measurements on particles of peculiarly low momentum, another very clear opportunity for UV-IR mixing is provided by data on the behavior of gravity on very large distance scales. And in that context speculating about new-physics phenomena is fully justified by the observed non-Keplerian features of the rotation curves of galaxies or clusters [183]. These non-Keplerian features are usually interpreted as motivation for introducing dark matter (or other non-quantum-gravity new physics, such as MOND [416]), but, in light of the recent awareness of the possibility of UV/IR mixing, it is legitimate to speculate that they may be at least in part due to quantum-spacetime effects.

The perspective one might adopt in trying to profit from this opportunity is similar to the one familiar from standard quantum field theory, where one derives an “effective potential” (usually obtained through the calculation of loop contributions) that corrects the tree-level classical potential.

Interestingly, the type of modifications of dispersion relations that have been motivated by quantum-spacetime research do automatically suggest that the Newtonian potential should receive some corresponding corrections. In fact, the Newtonian potential is produced by a static point source when the field that mediates the force described by the potential has energy-momentum-space (inverse) propagator \(G^{-1}(E,p) = E^2 - p^2\). In general, if the field that mediates the force has a different propagator, \(G_{def}^{- 1}(E,p)\), the Newtonian potential produced at the spatial point \(\vec r\) by a point-like mass M, located at the origin, is replaced by the potential obtained by computing [283]

$$V(\vec r) = L_p^2M\int {{{{d^3}p} \over {2{\pi ^2}}}{G_{def}}(0,\vec p)\;{e^{i\vec p \cdot \vec r}},}$$
(92)

i.e., the potential is the spatial Fourier transform of the propagator evaluated at E = 0.
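
It is instructive to carry out the integral of Eq. (92) numerically. Doing the angular integration analytically gives \(V(r) = (2L_p^2M/\pi r)\int_0^\infty dp\,p\,{G_{def}}(0,p)\sin (pr)\), and the standard propagator indeed returns the Newtonian 1/r behavior. The deformed propagator in the sketch below is purely illustrative (a simple \(p^4\) deformation with scale Λ, chosen by me, not taken from Ref. [283] or Ref. [469]):

```python
# Numerical check of Eq. (92): the spatial Fourier transform of the static
# propagator G(0,p) = 1/p^2 reproduces the Newtonian 1/r potential; an
# illustrative deformed propagator G_def = 1/(p^2 + p^4/Lambda^2) gives a
# Yukawa-like modification, V ~ (1 - exp(-Lambda*r))/r.
import numpy as np
from scipy.integrate import quad

def V(r, G, Lp2M=1.0):
    # V(r) = (2 Lp^2 M / (pi r)) * Int_0^inf dp  p G(p) sin(p r)
    integrand = lambda p: p * G(p)
    val, _ = quad(integrand, 1e-12, np.inf, weight='sin', wvar=r, limlst=100)
    return 2.0 * Lp2M * val / (np.pi * r)

G_newton = lambda p: 1.0 / p**2
Lam = 1.0
G_def = lambda p: 1.0 / (p**2 + p**4 / Lam**2)

for r in (0.5, 1.0, 5.0):
    print(f"r={r}: V_newton*r = {V(r, G_newton)*r:.4f},  "
          f"V_def*r = {V(r, G_def)*r:.4f}  "
          f"(analytic: 1 - e^-r = {1 - np.exp(-Lam*r):.4f})")
```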

A more articulated argument for modifications of the Newtonian potential at large distances from a quantum-spacetime perspective has been put forward as part of the mentioned research program on “asymptotic safety”. This is done in Ref. [469], which indeed adopts as a working assumption the availability of a quantum field theory of gravity whose underlying degrees of freedom are those of the spacetime metric, defined nonperturbatively as a fundamental, “asymptotically-safe” theory. Obtaining definite predictions for the rotation curves of galaxies or clusters within this formalism is presently well beyond our technical capabilities. However, preliminary studies of the renormalization-group behavior provide encouragement for a certain level of analogy between this theory and non-Abelian Yang-Mills theories, and, relying in part on this analogy, Ref. [469] argued that the renormalization-group running of the gravitational coupling could produce non-Keplerian features.

5.7 An aside on gravitational quantum wells

Another opportunity for studies of UV/IR mixing is provided by measurements performed on neutron quantum states in the gravity field of the Earth, such as the striking ones reported in Refs. [428, 429]. I have nothing to report on this that would fit the main focus of this review, concerning Planck-scale quantum pictures of spacetime, but it seemed worth mentioning nonetheless, especially in light of the fact that this class of low-energy studies (candidates for the investigation of UV/IR mixing) has already been analyzed from the perspective of some quantum spacetimes, even though so far all such studies have introduced spacetime quantization at scales that are very far from the Planck scale (much lower energy scales, much greater distance scales).

Since I am already here diverting from the main theme of the review, I shall be satisfied confining the discussion of quantum-spacetime studies of the gravitational quantum well to the particularly interesting points made in Refs. [118, 102, 483, 137]. The studies in Refs. [118, 102, 483] all assumed “canonical noncommutativity” of spacetime coordinates:

$$[{x_j},{x_k}] = i{\theta _{jk}},\quad [{x_j},{x_0}] = i{\theta _{j0}},$$
(93)

where I separated the space/space noncommutativity (parametrized by \({\theta _{jk}}\)) from the space/time noncommutativity (parametrized by \({\theta _{j0}}\)).

And Refs. [118, 102, 483] agree on the fact that pure space/space noncommutativity (\({\theta _{j0}} = 0\)) has no significant implications for the gravitational quantum well. However, Ref. [483] notices that with space/time noncommutativity (\({\theta _{j0}} \neq 0\)) there are tangible consequences for the gravitational quantum well, so that in turn one can use the measurement results of Refs. [428, 429] to put bounds on space/time noncommutativity (see footnote 40), although only at the level \({\theta _{j0}} < 10^{-9}\,{\rm{m}}^2\) (whereas interest from the Planck-scale-quantum-spacetime side would focus in the neighborhood of \({\theta _{j0}} \sim 10^{-70}\,{\rm{m}}^2\)).

Refs. [118, 102] make the choice of combining space/space noncommutativity with a noncommutativity of momentum space:

$$[{p_j},{p_k}] = i{\psi _{jk}}.$$
(94)

It then turns out that this noncommutativity of momentum space does tangibly affect the analysis of the gravitational quantum well, so that in turn one can use the measurement results of Refs. [428, 429] to place bounds at the level \({\psi _{jk}} < 10^{-6}\,{\rm{eV}}^2\).

Ref. [137] is an example of analysis of the gravitational quantum well not from the viewpoint of spacetime noncommutativity, but rather from the viewpoint of the scheme of spacetime quantization introduced in Refs. [323, 322], which is centered on a modification of the Heisenberg principle

$$[{x_j},{p_k}] = i{\delta _{jk}}(1 + \beta {p^2}).$$
(95)

The parameter β does turn out [137] to affect the analysis of the gravitational quantum well, and using the measurement results of Refs. [428, 429] one can place bounds at the level \(\beta < 10^{-18}\,{\rm{m}}^2\).
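
To give a concrete sense of the energy scales probed in these experiments, the unperturbed gravitational quantum well is a textbook problem: for a particle of mass m bouncing on a mirror in the uniform field g, the energy levels are fixed by the zeros of the Airy function, \({E_n} = {({\hbar ^2}m{g^2}/2)^{1/3}}\vert {a_n}\vert\). A minimal sketch for the neutron (standard physics, with no quantum-spacetime correction included):

```python
# Energy levels of a neutron bouncing in the Earth's gravitational field
# (the system measured in Refs. [428, 429]): E_n = (hbar^2 m g^2 / 2)^(1/3) * |a_n|,
# where a_n are the zeros of the Airy function Ai.
import numpy as np
from scipy.special import ai_zeros

hbar = 1.0545718e-34   # [J s]
m_n  = 1.6749275e-27   # neutron mass [kg]
g    = 9.81            # [m/s^2]

E_scale = (hbar**2 * m_n * g**2 / 2.0)**(1.0/3.0)   # [J]
a_n, _, _, _ = ai_zeros(4)                           # first 4 zeros of Ai

for n, a in enumerate(np.abs(a_n), start=1):
    E = E_scale * a
    print(f"E_{n} = {E/1.602e-19*1e12:.2f} peV")     # convert J -> peV
```

The levels come out in the peV range (about 1.4 peV for the ground state), which is the arena in which the bounds quoted above on \({\theta _{j0}}\), \({\psi _{jk}}\) and β are obtained.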

6 Quantum-Spacetime Cosmology

In the previous Sections 3, 4, and 5, I discussed several opportunities for investigating candidate quantum-spacetime effects through observations in astrophysics and, occasionally, some controlled laboratory setups. However, it is likely that cosmology will gradually acquire more and more weight in the search for manifestations of quantum-spacetime effects. In the earliest stages of evolution of the Universe, the typical energies of particles were much higher than the ones we can presently achieve, and high-energy particles are the ideal probes of the short-distance structure of spacetime. Over these past few years, several studies that could be viewed as preparing the ground for this use of cosmology have been presented in the literature. Most of these proposals do not have the structure and robustness necessary for actual phenomenological analyses, such as the ones setting bounds on the parameters of a given quantum-spacetime picture. But the overall picture emerging from these studies confirms the expectation that cosmology has the potential to be a key player in quantum-spacetime phenomenology.

For a combination of reasons, I shall review this recent literature in an even more sketchy way than other parts of this review. This reflects the fact that I view this area as still at a very early stage of development: we are probably just starting to learn what could be the observable manifestations of the quantization of spacetime in cosmology. Even in cases when the quantum-spacetime side is reasonably well understood, the study of the implications for cosmology, when focused on possible observably-large manifestations, is still in its infancy. Moreover, most of the work done so far in this area does not even invoke a definite role for spacetime quantization, which is the main focus of this review, but rather finds inspiration in generic features of the quantum-gravity problem, or relies on string theory (which, as stressed, is a fully legitimate quantum-gravity candidate, but is one such candidate that, as presently understood, would rather lead us to assume that quantum properties of spacetime are absent/negligible).

In light of these considerations, the list of proposals and ideas that composes this section is not representative of the list of scenarios being considered in quantum-spacetime cosmology. It mainly serves the purpose of offering some illustrative examples of how one might go about proposing a quantum-spacetime-cosmology scenario and giving some strength to my opening remarks foreseeing a great future for quantum-spacetime cosmology.

In the last Section 6.5, I briefly mention some examples of quantum-gravity-cosmology proposals, which in their present formulation do not invoke a role for the quantization of spacetime (but could inspire future reformulations centered on a quantum-spacetime perspective).

6.1 Probing the trans-Planckian problem with modified dispersion relations

In the long run, one of the most significant opportunities for quantum-spacetime phenomenology could be an aspect of quantum-spacetime cosmology: the trans-Planckian problem. Inflation works in such a way that some of the scales that are presently of cosmological interest should have been trans-Planckian scales at the beginning of inflation, and, therefore, cannot be handled satisfactorily without (the correct) quantum gravity [134, 393, 197]. In extrapolating the evolution of cosmological perturbations according to linear theory to very early times, we are implicitly making the assumption that the theory remains perturbative to arbitrarily-high energies. And it is easy to see that the expected new physics at the Planck scale could affect our predictions. For example, if there was a sharp Planck-scale cutoff in the theory, then, if inflation lasts many e-foldings, the modes that represent fluctuations on galactic scales today would not be present [134] in the theory, since their wavelength would have been smaller than the cutoff length at the beginning of inflation.
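
The arithmetic behind this last statement is easily made explicit. The sketch below traces a mode of galactic scale today back through the post-inflationary expansion and through N e-folds of inflation; the assumed expansion factor since the end of inflation (roughly GUT-scale reheating) is a strong assumption, so the output should be read as order-of-magnitude only:

```python
# Illustrative arithmetic for the trans-Planckian problem: how many e-folds
# of inflation push a galactic-scale mode below the Planck length at the
# beginning of inflation? The reheating redshift is an assumed input.
import numpy as np

L_planck  = 1.6e-35     # [m]
lam_today = 1e21        # galactic scale today, ~30 kpc [m]
z_reheat  = 5e27        # assumed expansion factor since the end of inflation

lam_end = lam_today / z_reheat      # wavelength at the end of inflation [m]
# trans-Planckian at the start once exp(-N) * lam_end < L_planck:
N_crit = np.log(lam_end / L_planck)
print(f"wavelength at end of inflation : {lam_end:.1e} m")
print(f"critical number of e-folds     : {N_crit:.0f}")
```

Under these assumptions one finds a critical value of roughly 65 e-folds, which is why any inflationary scenario lasting much longer than the minimum needed to solve the horizon problem runs into the trans-Planckian issue.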

While in the long run this might get very exciting, I feel we are at present only at a very early stage of exploration of the potentialities of this opportunity for quantum-spacetime phenomenology. But there is growing awareness of this opportunity and the related literature starts to grow large (see, e.g., Refs. [393, 197, 435, 136, 320, 172, 410, 217, 266, 198, 321, 145, 274, 510, 383, 135], and references therein). Many of these studies [136, 172, 410, 217, 266, 198, 321, 274] have probed the possibility that a short-distance cutoff might leave a trace in cosmology measurements such as the ones conducted on the cosmic microwave background.

Among the scenarios that have so far been considered in relation to the trans-Planckian problem, the ones that are more directly linked with the study of spacetime quantization are those involving Planck-scale modifications of the dispersion relation (see, e.g., Refs. [383, 32, 382]). For example, one may consider the possibility [32] of dispersion relations with a trans-Planckian branch, where energy increases with decreasing momenta, such as

$${\omega ^2} \simeq {k^2} - {\alpha _4}{k^4} + {\alpha _6}{k^6}$$
(96)

for appropriate choices of the parameters \({\alpha _4}\) and \({\alpha _6}\). A radiation-dominated Universe with particles governed by such modified dispersion relations ends up being characterized [32] by negative radiation pressure and remarkably may be governed by an inflationary equation of state, even without introducing an inflaton field.
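
To see the anomalous branch explicitly, one can simply scan Eq. (96) numerically. In the sketch below the parameters are chosen ad hoc, in Planck units, within the window \(\alpha _4^2/4 < {\alpha _6} < \alpha _4^2/3\) (a window I worked out so that ω stays real at all k while still allowing a range of negative group velocity); the values are meant only to exhibit the qualitative feature, not to reproduce the parameter choices of Ref. [32]:

```python
# Scan of the dispersion relation (96) for toy parameters (Planck units).
# For alpha4**2/4 < alpha6 < alpha4**2/3 the frequency omega stays real
# for all k while d(omega)/dk becomes negative in an intermediate range
# of k: the branch where energy decreases with increasing momentum.
import numpy as np

alpha4, alpha6 = 1.0, 0.3
k = np.linspace(0.01, 3.0, 3000)
omega = np.sqrt(k**2 - alpha4*k**4 + alpha6*k**6)
v_g = np.gradient(omega, k)          # numerical group velocity

neg = k[v_g < 0]
print(f"d(omega)/dk < 0 for k in [{neg.min():.2f}, {neg.max():.2f}]")
```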

These results (and those of Refs. [248, 318, 437, 273]) establish a connection with previous attempts at replacing inflation by scenarios accommodating departures from Lorentz symmetry, such as the scenario with a time-varying speed of light, introduced in works by Moffat [419] and in works by Albrecht and Magueijo [30] (see also Ref. [104]). By postulating an appropriate time variation of the speed of light one can affect causality in a way that is somewhat analogous to inflation: very distant regions of the Universe, which could never have been in causal contact with a time-independent speed of light, could have been in causal contact at very early times if at those early times the speed of light was much higher than at the present time. As argued in the recent review given in Ref. [385], this scenario is rather severely constrained, but should still be considered a viable alternative to inflation.

6.2 Randomly-fluctuating metrics and the cosmic microwave background

Also relevant for cosmology are the mentioned studies suggesting that spacetime quantization could effectively produce spacetime fuzziness/foam amenable to description in terms of a fluctuating spacetime metric [206, 557, 460] (see also Refs. [133, 525]). In particular, one can consider [206] fluctuating spacetime metrics amounting to a fluctuating lightcone. Such lightcone fluctuations affect the arrival times of signals from distant sources, resulting in a broadening of the observed spectra. It was observed in Ref. [206] that starting with a thermal spectrum one would end up with a slightly different spectrum. This can be summarized in a simple phenomenological formula for spectrum distortion [206]

$$F(\omega) = {F_0}(\omega)[1 + f(\omega)],$$
(97)

where \({F_0}(\omega)\) is the spectrum expected without the lightcone-fluctuation effects and f(ω) encodes the corrections due to the lightcone fluctuations. Ref. [206] provides arguments in support of the possibility that the corrections due to lightcone fluctuations could get very large at large frequencies, with f(ω) growing like \(\omega^4\). As we achieve better and better accuracy in the measurement of possible high-frequency departures from a thermal spectrum for the cosmic microwave background, we should then find evidence of such effects [206].
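
A toy illustration of why the high-frequency tail is the place to look: applying Eq. (97) with \(f \propto x^4\), where \(x = h\nu/kT\), to a blackbody spectrum, the fractional distortion grows steeply with frequency. Both the amplitude ε and this choice of normalization are arbitrary inputs of my own:

```python
# Toy application of Eq. (97): distortion of a blackbody spectrum by a
# correction f growing like the fourth power of the frequency.
import numpy as np

def planck(x):                 # blackbody spectrum in units x = h*nu/(k*T)
    return x**3 / np.expm1(x)

x = np.array([1.0, 5.0, 10.0, 15.0])
eps = 1e-6                     # toy amplitude of the fluctuation effect
F0 = planck(x)
F = F0 * (1.0 + eps * x**4)    # Eq. (97) with f = eps * x^4

for xv, f0, f in zip(x, F0, F):
    print(f"x = {xv:5.1f}: fractional distortion = {f/f0 - 1:.2e}")
```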

Ref. [460] also explored the possibility that lightcone fluctuations might have observable implications for the gravitational-wave background. The gravitational-wave background is emitted much earlier than the cosmic microwave background, but it was found [460] that the flat nature of the gravitational-wave-background spectrum is such that the effects of lightcone fluctuations are negligible.

6.3 Loop quantum cosmology

An area of quantum-spacetime cosmology that is not directly linked to the tools and scenarios considered in other areas of quantum-spacetime phenomenology is Loop Quantum Cosmology (LQC) [125, 94, 126, 92]. This is a framework for implementing, in a cosmological setting, several effects seen to arise for the quantum spacetime of LQG. The most popular formulations of LQC are defined on “minisuperspace”, where one quantizes homogeneous spacetimes using the methods of LQG. And one finds that the characteristic discreteness of the LQG spacetime quantization changes the dynamics of expanding-Universe models. These changes are particularly significant at high densities, giving rise to mechanisms avoiding classical singularities.

And one can also consider the novel quantum-spacetime effects at later stages of the Universe expansion, when densities are lower and the corrections can be treated perturbatively in a gauge-invariant way. This can be done in particular for linear perturbations around spatially flat Friedmann-Robertson-Walker models, and the results are found to be primarily characterized in terms of “inverse-volume corrections”, due to the fact that the quantized densitized triad has a discrete spectrum, with the value zero contained in the spectrum. Essentially one finds that the LQG quantum-spacetime effects can be effectively described in terms of a novel repulsive force [92]. This repulsion can compete with the standard gravitational attraction, and can even become the dominant contribution, thereby evading the singularity, when the curvature is strong.

In Refs. [126, 92, 127], and references therein, readers can find a list of possible signatures of LQC. In my opinion, at present, such tests of predictions of LQC may tell us more about the choice of setup for incorporating the quantum-spacetime effects, rather than providing actual information on the quantum structure of spacetime. But as this novel approach keeps maturing, it may well turn into a key resource for experimentally probing the quantum structure of spacetime.

6.4 Cosmology with running spectral dimensions

As mentioned earlier in this review, several formalisms relevant to the study of the quantum-gravity problem have recently been shown to host the mechanism of running spectral dimensions. The spectral dimension of a spacetime is essentially defined [506] by considering a fictitious diffusion process, with the spectral dimension given in terms of the average return probability for given (fictitious) diffusion time. When the number of spectral dimensions matches the number of Hausdorff dimensions of a spacetime, the return probability depends on diffusion time in a characteristic way that is indeed found in all models for large diffusion times. But at short diffusion times one finds in several studies of interest for quantum-gravity and quantum-spacetime research that the average return probability has properties signaling a number of spectral dimensions smaller than the number of Hausdorff dimensions.
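
The diffusion-based definition is easy to make concrete. For diffusion in d dimensions the return probability scales as \(P(\sigma) \propto \sigma^{-d/2}\), so the spectral dimension can be read off as \({d_s} = -2\,d\ln P/d\ln \sigma\). The following sketch (my own illustration, on an ordinary flat lattice, where nothing runs and \(d_s\) simply reproduces the Hausdorff dimension) estimates \(d_s\) from Monte-Carlo random walks:

```python
# Monte-Carlo illustration of the spectral dimension: for a random walk on
# the lattice Z^d the return probability falls off as P(n) ~ n^(-d/2), so
# d_s = -2 * dlnP/dln(n) recovers d_s ~ d at large diffusion times n.
import numpy as np

rng = np.random.default_rng(0)

def return_probability(d, n_steps, walkers=400000):
    pos = np.zeros((walkers, d), dtype=np.int64)
    for _ in range(n_steps):
        axis = rng.integers(0, d, size=walkers)       # pick an axis
        sign = rng.choice([-1, 1], size=walkers)      # step +1 or -1
        pos[np.arange(walkers), axis] += sign
    return np.mean(np.all(pos == 0, axis=1))

for d in (2, 4):
    n1, n2 = 20, 40                                   # even, so returns exist
    P1, P2 = return_probability(d, n1), return_probability(d, n2)
    d_s = -2.0 * np.log(P2/P1) / np.log(n2/n1)
    print(f"d = {d}: estimated spectral dimension ~ {d_s:.2f}")
```

In the quantum-gravity studies mentioned below, by contrast, the short-diffusion-time behavior of the return probability departs from this flat-space scaling, and that departure is what is meant by “running” of the spectral dimension.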

First results of this type were found in studies done within the framework of causal dynamical triangulations [48], naturally working with four Hausdorff dimensions and finding that the behavior at small diffusion times signaled two spectral dimensions. Two spectral dimensions at small diffusion times were then also found in studies inspired by asymptotic safety [360, 467] and studies based on Hořava-Lifshitz gravity [290]. A somewhat different situation is found in studies inspired by spacetime noncommutativity [110, 87, 31] and by spin foams [418], but still giving fewer than four spectral dimensions at small diffusion times (also see Refs. [506, 153]).

These “running spectral dimensions” could have very significant implications for cosmology, as suggested at least intuitively by the definition of spectral dimensions based on the dependence of the return probability on the diffusion time. At present these potentialities are still largely unexplored, but one possibility has been debated in Refs. [424, 505, 425]. Ref. [424], citing as motivation some of the quantum-gravity studies exhibiting running spectral dimensions, proposed that such studies should motivate the search for indirect evidence of the absence of gravitational degrees of freedom in the early Universe. However, Ref. [505] more prudently observed that gravitational degrees of freedom are indeed absent in spacetimes with three or fewer Hausdorff dimensions, but may well be present in spacetimes with four Hausdorff dimensions but with three or fewer spectral dimensions.

6.5 Some other quantum-gravity-cosmology proposals

So far in this section, consistent with the main goals of this review, I have focused on cosmology proposals that are based on (or at least directly linkable to) some theories of quantum spacetime. In this last part of the section, I mention just a few illustrative examples of ideas and proposals that are still based on the quantum-gravity problem, but without invoking any definite quantum properties of spacetime. It is not unlikely that future exploitations of the associated phenomenological opportunities would involve spacetime quantization.

6.5.1 Quantum-gravity-induced vector fields

An active area of quantum-gravity cosmology focuses on Lorentz-violating vector fields. These studies do not take any specific scenario for spacetime quantization as their reference, but they are being linked generically to the opportunities that the quantum-gravity problem provides for the emergence of Lorentz-violating vector fields. Several possible implications are being considered, including the possibility that in the presence of such Lorentz-violating vector fields the Universe might experience a slower rate of expansion for a given matter content (see, e.g., Ref. [159]).

It is also emerging that the implications of such Lorentz-violating vector fields would be rather significant for the cosmic microwave background [368]. In particular, as stressed in other parts of this review, the presence of Lorentz-violating vector fields is often associated with energy-dependent birefringence. And since its radiation originates from the surface of last scattering, the most distant source of light, the cosmic microwave background provides a very powerful opportunity to test anomalous features of photon propagation. Several techniques of data analysis have been developed that are capable of constraining the birefringence of photon propagation using cosmic-microwave-background data (see, e.g., Refs. [317, 315, 344, 270] and references therein).

6.5.2 A semiclassical Wheeler-DeWitt-based description of the early Universe

Cosmology is also an arena where some interest is attracted by studies of the semiclassical limit of quantum gravity. These do not invoke a quantum-spacetime picture and may not even rely on any given quantum-gravity proposal. They are rather viewed in analogy [325] with the semiclassical limits of other quantum theories: one can consider, for example, the corrections to the classical Maxwell action described by Heisenberg and Euler (in a pre-QED era) in terms of quantum fluctuations of electrons and positrons, which can now be rederived [325] from QED by integrating out the fermions and expanding in powers of \(\hbar\).

These studies of the semiclassical limit of quantum gravity are often centered around the Wheeler-DeWitt equation. While for the development of a full quantum-gravity theory the Wheeler-DeWitt equation has proven to be extremely “cumbersome”, the fact that it is rather intuitively formulated is convenient for setting up a semiclassical approximation (see, e.g., Refs. [326, 447, 409]). A result that can be rather readily analyzed from a phenomenology perspective is the one providing [326] correction terms for the Schrödinger equation, obtained through a formal expansion of the Wheeler-DeWitt equation with respect to powers of the Planck mass. Unsurprisingly, the relevant correction terms are far too small to matter in laboratory experiments [326]. However, it is plausible that such a procedure could give rise to observably-large effects in the description of the early stages of the evolution of the Universe. In particular, the semiclassical approximation set up in Ref. [326] could be used rather straightforwardly to describe corrections to the Schrödinger equation for higher multipoles on a Friedmann background.

6.5.3 No-singularity cosmology from string theory

An interesting string-theory-inspired area of cosmology research revolves around a scenario for singularity avoidance linked to the availability of duality transformations, which allow one to set up a suitable “pre-Big-Bang” scenario [253, 143, 142]. In this scenario the Universe starts inflating from an initial state characterized by very small curvature and weak interactions. The small-curvature initial state is gravitationally unstable and would naturally evolve [253, 143, 142] into states with higher curvature, until string-size (roughly Planck-scale-size) effects are strong enough to induce a “bounce” into a decreasing-curvature regime. Instead of a conventional hot big bang one would have [253, 143, 142] a “hot big bounce”, in which in particular the heating mechanism is provided by the quantum production of particles in the pre-bounce phase characterized by high curvature and strong interactions.

For this string-inspired pre-big-bang scenario several possible observational consequences have been discussed [253, 143, 142], including the one of a stochastic background of gravity waves due to a background of gravitons from the pre-big-bang phase. It appears to be plausible [253, 143, 142] that the magnitude of the associated effects might be within the range of sensitivities of modern gravity-wave interferometers.

7 Quantum-Spacetime Phenomenology Beyond the Standard Setup

Most of the ideas for phenomenology reviewed here are set up following a common strategy. They reflect the expectation that the characteristic scale of quantum-spacetime effects should be within a few orders of magnitude of the Planck scale, and that it should be possible (for studies conducted at scales much below the Planck scale) to analyze quantum-spacetime effects using an expansion in powers of the Planck length. All this is inspired by analogous strategies that have been very fruitful in other areas of physics: many arguments indicate that the Planck scale is the scale where the current theories break down, and usually the breakdown scale is also the scale that governs the magnitude of the effects of the needed new theory. In the case of quantum-spacetime research the expectation of perturbative effects suppressed by a large scale finds further motivation in at least two observations:

  • The effects we expect from spacetime quantization are rather striking, qualitatively virulent departures from the structure of our current theories. The fact that no trace of such “easily noticeable” effects has ever been seen surely provides further encouragement for the expectation of perturbative effects suppressed by an ultralarge scale.

  • I would list the evidence in favor of grand unification as an even more significant source of additional encouragement for the expectation of perturbative effects suppressed by an ultralarge scale. If that evidence is taken at face value (as, I would argue, we should, at least as a natural working assumption) it suggests that particle physics works well on its own up to a scale of about \(10^{-3}\) of the Planck scale. If quantum-spacetime effects were non-perturbative in ways affecting grand unification, or if the scale of spacetime quantization was much lower than the Planck scale, it would then be hard to explain the (preliminary) success of the grand-unification idea.

In light of this, surely quantum-spacetime phenomenologists should continue to focus most of their efforts on applications of the standard strategy, assuming perturbative effects suppressed by a scale in some neighborhood of the Planck scale. However, other scenarios and opportunities should not be completely overlooked. We are clearly presently unable to exclude that the correct quantum picture of spacetime might turn out to be unsuitable to the standard strategy of quantum-spacetime phenomenology. As a way to give some substance to this assessment, I briefly discuss examples of mechanisms that could render ineffective the standard strategy of quantum-spacetime phenomenology.

7.1 A totally different setup with large extra dimensions

Can the scale characteristic of quantum-spacetime effects be much lower than the Planck scale? We surely know at least one mechanism by which the quantum-gravity scale can be much lower than the Planck scale, and therefore quantum-gravity models with spacetime quantization affected by this mechanism would describe quantum-spacetime effects at a relatively low scale. I am thinking of the popular scenarios with large extra dimensions.

Through these scenarios one can achieve a sizeable reduction in the quantum-gravity scale with the introduction of D extra space dimensions [80, 375, 552, 84] of finite size \({R_\ast}\). Then the fundamental length scale \({L_D}\) characteristic of quantum gravity in the 3+D+1-dimensional spacetime can be much bigger than the Planck length. The smallness of the Planck length can emerge as the result of the fact that, as deduced from applying Gauss’s law in the 3+D+1-dimensional context, the strength of gravitation at distance scales larger than the size \({R_\ast}\) of the extra dimensions in the ordinary (infinite-size) 3+1-dimensional spacetime would be proportional to the square-root of the inverse of the volume of the external compactified space multiplied by an appropriate power of \({L_D}\).
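
In formulas, this Gauss-law dilution amounts to \(M_{Planck}^2 \sim M_D^{2 + D}R_\ast ^D\) (with \({M_D} \equiv 1/{L_D}\), and dropping numerical factors of order \({(2\pi)^D}\) that depend on the compactification conventions). A minimal sketch of the resulting sizes, for the illustrative choice \({M_D} = 1\) TeV:

```python
# Gauss-law estimate of the size R_* of D large extra dimensions, from
# M_Planck^2 ~ M_D^(2+D) * R_*^D (compactification-volume factors dropped).
hbar_c = 1.973e-16         # [GeV m], to convert 1/GeV into meters
M_Pl = 1.22e19             # 4d Planck scale [GeV]
M_D = 1.0e3                # assumed fundamental scale, 1 TeV [GeV]

for D in (1, 2, 3):
    R = (M_Pl**2 / M_D**(2 + D))**(1.0/D) * hbar_c   # [m]
    print(f"D = {D}: R_* ~ {R:.1e} m")
```

The output makes explicit why D = 2 is the phenomenologically-interesting case: D = 1 would require an extra dimension of astronomical size, while D ≥ 3 gives sizes too small for tabletop gravity tests (and current constraints push \({M_D}\) above 1 TeV, shrinking \({R_\ast}\) further).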

These scenarios need to be tuned rather carefully in order to get a phenomenologically-viable picture. Essentially the only truly appealing possibility is the one of 2 extra dimensions of relatively “large” size, somewhere below millimeter size [84] (perhaps \(10^{-4}\) or \(10^{-5}\) meters). There might be other extra dimensions of smaller (possibly Planckian) size, but for the desired phenomenology one needs two and only two extra dimensions of relatively large size; otherwise one finds effects that either violate known experimental facts or are too small to ever be tested. But with these (however contrived) choices one does end up with a phenomenologically-exciting scenario in which the fundamental length scale of quantum gravity \({L_D}\) is somewhere in the neighborhood of the \({({\rm{TeV}})^{-1}}\) length scale, and therefore within the reach of particle-physics experiments (see, e.g., Refs. [260, 258, 207, 82]). Moreover, there are phenomenologically-relevant implications for the behavior of (classical) gravity at submillimeter distances [84, 295].

7.2 The example of hard UV/IR mixing

The large-extra-dimension scenario is an example of inapplicability of the standard setup of quantum-spacetime phenomenology due to the fact that, within that scenario, the characteristic scale of quantum gravity is not the Planck scale. There are also scenarios in which one may still assume that quantum-spacetime effects are fundamentally introduced at the Planck scale, but the standard setup of quantum-spacetime phenomenology is inapplicable because the most characteristic effects are not describable in terms of an expansion in powers of the Planck length.

I have already discussed this possibility in Section 5, devoted to soft UV/IR mixing. However, in that context one could fall back on roughly the standard strategy of quantum-spacetime phenomenology, by looking for an IR scale playing the role of characteristic scale of the IR manifestations of the quantum properties of spacetime. Let me just stress that an even more pervasive revision of the standard strategy of quantum-spacetime phenomenology would be required in the case of hard UV/IR mixing, which might take the form of correction terms behaving like inverse powers of momentum. With hard UV/IR mixing one should expect that in certain contexts the departures from known physical laws would be dramatic. The most efficacious tests of this hypothesis might not take the shape of searches for small corrections to standard predictions in ordinary contexts, but rather be based on the identification of those peculiar contexts where the implications of UV/IR mixing are large.

7.3 The possible challenge of not-so-subleading higher-order terms

Some challenges for the standard setup of quantum-spacetime phenomenology may also be present when the effects are genuinely introduced at the Planck scale and there is nothing peculiar about the IR sector. In particular, precisely because this standard setup is based on a (truncated) expansion in powers of the Planck length, it can happen that the formally-subleading terms (higher powers of the Planck length), which are usually neglected in leading-order analyses, are actually not negligible. The fact that experiments suitable for quantum-spacetime phenomenology must host, as I stressed at several points of this review, some ultralarge ordinary-physics dimensionless “amplifiers” could play a role in these concerns: if some mechanism is allowing the tiny leading-order Planck-length correction to be observably large, it would not be so surprising to find that the same (or some other) amplifier is also such that some “formally subleading” Planck-length corrections, neglected in the analysis, are significant.

And another possible source of concern can originate from the fact that some of the contexts of interest for quantum-spacetime phenomenology are characterized by several length scales: expansions in powers of the Planck length actually are expansions in powers of some dimensionless quantity obtained dividing the Planck length by a characteristic length scale of the physical context of interest, and some “pathologies” may be encountered if there are several candidate length scales for the expansion.

While I feel that these issues for the power expansion should not be ignored, it is partly reassuring that the only explicit examples we seem to be able to come up with are rather contrived. For example, in order to illustrate the issues connected with the many length scales available in certain contexts of interest for quantum-spacetime phenomenology, I cannot mention anything more appealing than the following ad hoc formulation of a deformation of the speed-energy relation applicable in the “relativistic regime” (\(E \gg m\)):

$$v \simeq 1 - {{{m^2}} \over {2{E^2}}} + \eta {L_p}E\left({\tanh \left({{{L_p^2{E^6}} \over {{m^4}}}} \right) - 1} \right).$$
(98)

At low (but still “relativistic”) energies this would fit within a picture that has been much studied from the quantum-spacetime-phenomenology perspective, that of the speed-energy relation \(v \simeq 1 - {m^2}/(2{E^2}) - \eta {L_p}E\). But whereas in the relevant literature it is assumed that the term \(\eta {L_p}E\), if present, should always be the leading correction, up to particle energies on the order of the Planck scale, \(E \sim 1/{L_p}\), from Eq. (98) one would find that the correction term is already no longer leading at particle energies on the order of \(E \sim {({m^2}/{L_p})^{1/3}}\), i.e., well below the Planck scale. Here the point is that Eq. (98) is characterized by two distinct small quantities suitable for the “expansion in powers of \({L_p}\)”: the quantity \({L_p}E\) and the quantity \({L_p}{E^3}/{m^2}\).
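
This crossover is easy to exhibit numerically. The sketch below (toy units with \({L_p} = 1\) and an arbitrarily-chosen mass) compares the naive leading correction \(- \eta {L_p}E\) with the full correction term of Eq. (98):

```python
# Numerical illustration of Eq. (98): the naive leading correction -eta*Lp*E
# and the full tanh-modulated correction part company already at
# E ~ (m^2/Lp)^(1/3), far below the Planck scale E ~ 1/Lp = 1 in these units.
import numpy as np

Lp, m, eta = 1.0, 1e-6, 1.0
E = np.logspace(-6, -2, 5)

naive = -eta * Lp * E
full = eta * Lp * E * (np.tanh(Lp**2 * E**6 / m**4) - 1.0)

E_cross = (m**2 / Lp)**(1.0/3.0)
print(f"crossover scale (m^2/Lp)^(1/3) = {E_cross:.1e}")
for e, nv, fv in zip(E, naive, full):
    print(f"E = {e:.0e}:  naive = {nv:.2e}   full = {fv:.2e}")
```

Well below the crossover the two corrections coincide; above it the tanh saturates and the full correction switches off entirely, which is exactly the behavior a truncated leading-order analysis would miss.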

8 Closing Remarks

Clearly, the most significant development of these first few years of quantum-spacetime phenomenology has been our ability to uncover some experimental/observational contexts in which, through appropriate data analyses, we could gain access to effects introduced genuinely at the Planck scale. The compelling nature of such instances of genuine Planck-scale sensitivity, which are most simply and clearly illustrated in Section 1.5, should be contrasted with the more frequent case of “dimensional-analysis Planck-scale sensitivities”, which typically involve a description of a plausible quantum-spacetime effect in terms of a dimensionless parameter, estimated arbitrarily as a ratio of the Planck length and some characteristic length scale of the problem.

Looking at the results summarized in this review, different readers, depending on how stringent their criteria for genuine Planck-scale sensitivity are, will recognize only one or two examples. Not much, but much better than expected even just 15 years ago. And, as stressed, we do have, at this point, a rather encouraging list of contexts in which, while the availability of genuine Planck-scale sensitivity has still not been fully established, it appears that sensitivity to effects introduced genuinely at the Planck scale could be achieved in a not-so-distant future.

The fact that the development of this phenomenology is proving beneficial for the study of the idea of spacetime quantization is perhaps best testified by the fact that it is already managing to truly affect the directions taken by more formal work on spacetime quantization, especially in the areas of LQG and spacetime noncommutativity. Theorists in these areas follow the developments on the phenomenology side and do their best (the technical challenges they are facing are very severe) to derive results that can be exploited for the opportunities in phenomenology that are being established. In turn the phenomenology takes notice of the developments on the theory side, finding in them new input for enlarging the list of candidate quantum-spacetime effects that one could attempt to investigate experimentally.

The goal of testing/falsifying rigorous theories of spacetime quantization appears to still be beyond our present reach. But while most of the work in quantum-spacetime phenomenology so far has relied on simple-minded test theories describing candidate quantum-spacetime effects, I see first indications of a phase of further maturation of this phenomenology, in which we will actually test/falsify at least the most virulent rigorous formalizations of quantum spacetime. Planck-scale theories formulated in noncommutative versions of Minkowski spacetime are the example where we are presently closer to this goal.

The (however limited) information presently available to us appears to provide a clear invitation to continue to focus most of our efforts in the search for effects describable in terms of a (low-energy) expansion in powers of the Planck length, though other opportunities clearly should not be overlooked. Concerning the type of data on which quantum-spacetime phenomenology can rely, I have attempted to maintain throughout this review some visible separations between different proposals on the basis of whether they concern astrophysics, cosmology or controlled laboratory experiments. It is very clear that astrophysics has so far provided the most fruitful arena, but cosmology has the greatest potential reach (although for the most part this potential has not yet materialized). The role played so far in quantum-spacetime phenomenology by controlled laboratory experiments is rather marginal, but it would be important for the future development of quantum-spacetime phenomenology to find more opportunities for controlled laboratory experiments.