
Quantum Theory: von Neumann vs. Dirac

First published Tue Jul 27, 2004; substantive revision Mon May 21, 2012

An ongoing debate in the foundations of physics concerns the role of mathematical rigor in theorizing. The contrasting views of von Neumann and Dirac provide interesting and informative insights concerning two sides of this debate. Von Neumann's contributions emphasize mathematical rigor, while Dirac's emphasize pragmatic concerns. The discussion below begins with an assessment of their contributions to the foundations of quantum mechanics. Their contributions to mathematical physics beyond quantum mechanics are then considered, and the focus will be on the influence that these contributions had on subsequent developments in quantum theorizing, particularly with regard to quantum field theory and its foundations. The entry quantum field theory provides an overview of a variety of approaches to developing a quantum theory of fields. The purpose of this article is to provide a more detailed discussion of mathematically rigorous approaches to quantum field theory, as opposed to conventional approaches, such as Lagrangian quantum field theory, which are generally portrayed as being more heuristic in character. The current debate concerning whether Lagrangian quantum field theory or axiomatic quantum field theory should serve as the basis for interpretive analysis is then briefly discussed.


1. Introduction

There are two competing mathematical strategies that are used in connection with physical theory: one emphasizes rigor, the other pragmatics. The pragmatic approach often compromises mathematical rigor, but offers instead expediency of calculation and elegance of expression. A case in point is the notion of an infinitesimal, a non-zero quantity that is smaller than any finite quantity. Infinitesimals were used by Kepler, Galileo, Newton, Leibniz and many others in developing and using their respective physical theories, despite lacking a mathematically rigorous foundation, as Berkeley clearly showed in his famous 1734 treatise The Analyst criticizing infinitesimals. Such criticisms did not prevent various 18th-century mathematicians, scientists, and engineers such as Euler and Lagrange from using infinitesimals to get accurate answers from their calculations. Nevertheless, the pull towards rigor led to the development in the 19th century of the concept of a limit by Cauchy and others, which provided a rigorous mathematical framework that effectively replaced the theory of infinitesimals. A rigorous foundation was eventually provided for infinitesimals by Robinson during the second half of the 20th century, but infinitesimals are rarely used in contemporary physics. For more on the history of infinitesimals, see the entry on continuity and infinitesimals.

The competing mathematical strategies are manifest in a more recent discussion concerning the mathematical foundations of quantum mechanics. In the preface to von Neumann's treatise (1955) on that topic, he notes that Dirac provides a very elegant and powerful formal framework for quantum mechanics, but complains about the central role in that framework of an “improper function with self-contradictory properties,” which he also characterizes as a “mathematical fiction.” He is referring to the Dirac delta function, which has the following incompatible properties: it is defined over the real line, is zero everywhere except for one point at which it is infinite, and yields unity when integrated over the real line. Von Neumann promotes an alternative framework, which he characterizes as being “just as clear and unified, but without mathematical objections.” He emphasizes that his framework is not merely a refinement of Dirac's; rather, it is a radically different framework that is based on Hilbert's theory of operators.
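In modern notation, these incompatible properties can be displayed as follows; the third (the “sifting” property, the form in which the delta function is standardly used) follows formally from the first two:

    \delta(x) = 0 \ \ (x \neq 0), \qquad \int_{-\infty}^{+\infty} \delta(x)\, dx = 1, \qquad \int_{-\infty}^{+\infty} \delta(x - a)\, f(x)\, dx = f(a).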

Dirac is of course fully aware that the delta function is not a well-defined expression. But he is not troubled by this for two reasons. First, as long as one follows the rules governing the delta function (such as using the delta function only under an integral sign, meaning in part not asking for the value of the delta function at a given point), no inconsistencies will arise. Second, the delta function can be eliminated, meaning that it can be replaced with a well-defined mathematical expression. However, the drawback in that case is, according to Dirac, that the substitution leads to a more cumbersome expression that obscures the argument. In short, when pragmatics and rigor lead to the same conclusion, pragmatics trumps rigor due to the resulting simplicity, efficiency, and increase in understanding.
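Dirac's point about eliminability can be illustrated numerically. The following sketch — an illustration constructed for this purpose, not Dirac's own procedure — replaces the delta function with a narrow Gaussian (a “nascent” delta function) and checks that integrating it against a smooth function approaches the value of that function at the origin:

    import numpy as np

    def nascent_delta(x, eps):
        """Gaussian of width eps; behaves like the delta function as eps -> 0."""
        return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

    f = np.cos  # a smooth test function with f(0) = 1

    x = np.linspace(-10.0, 10.0, 200001)
    dx = x[1] - x[0]
    for eps in (1.0, 0.1, 0.01):
        integral = np.sum(nascent_delta(x, eps) * f(x)) * dx
        print(f"eps = {eps:5.2f}  integral = {integral:.6f}")  # approaches f(0) = 1

The well-defined replacement is visibly more cumbersome than the delta function itself, which is precisely Dirac's complaint against insisting on it.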

As in the case of the notion of an infinitesimal, the Dirac delta function was eventually given a mathematically rigorous foundation. That was done within Schwartz's theory of distributions, which was later used in developing the notion of a rigged Hilbert space. The theory of distributions was used to provide a mathematical framework for quantum field theory (Wightman 1964). The rigged Hilbert space was used to do so for quantum mechanics (Böhm 1966) and then for quantum field theory (Bogoliubov et al. 1975).

The complementary approaches, rigor and pragmatics, which are exhibited in the development of quantum mechanics, later came about in a more striking way in connection with the development of quantum electrodynamics (QED) and, more generally, quantum field theory (QFT). The emphasis on rigor emerges in connection with two frameworks, algebraic QFT and Wightman's axiomatic QFT. Algebraic QFT has its roots in the work of von Neumann on operator algebras, which he developed in an attempt to generalize the Hilbert space framework. Wightman's axiomatic QFT has its roots in Schwartz's theory of distributions, and it was later developed in the rigged Hilbert space framework. Roughly, the basic distinction between the two approaches is that the algebra of operators is the basic mathematical concept in algebraic QFT, while operator-valued distributions (the quantum analogues of field quantities) are fundamental in Wightman's axiomatic QFT. It is worth noting that algebraic QFT is generally formulated axiomatically, and that it is just as deserving of the name “axiomatic” QFT. However, that term is often taken to refer specifically to the approach based on operator-valued distributions. To avoid any possible confusion, that approach is referred to here as “Wightman's axiomatic” QFT. The emphasis on pragmatics arises most notably in Lagrangian QFT, which uses perturbation theory, path integrals, and renormalization techniques. Although some elements of the theory were eventually placed on a firmer mathematical foundation, there are still serious questions about its being a fully rigorous approach on a par with algebraic and Wightman's axiomatic QFT. Nevertheless, it has been spectacularly successful in providing numerical results that are exceptionally accurate with respect to experimentally determined quantities, and in making possible expedient calculations that are unrivaled by other approaches.

The two approaches to QFT continue to develop in parallel. Fleming (2002, 135–136) brings this into focus in his discussion of differences between Haag's Local Quantum Physics (1996) and Weinberg's Quantum Field Theory (1995); Haag's book presents algebraic QFT, and Weinberg's book presents Lagrangian QFT. While both books are ostensibly about the same subject, Haag gives a precise formulation of QFT and its mathematical structure, but does not provide any techniques for connecting with experimentally determined quantities, such as scattering cross sections. Weinberg gives a pragmatic formulation that engages with physical intuition and provides heuristics that are important for performing calculations; however, it is not mathematically rigorous. Moreover, there are a number of important topics that are examined in one book while not even mentioned in the other. For example, unitarily inequivalent representations are discussed by Haag, but not by Weinberg. By contrast, Weinberg discusses Feynman's rules for path integrals, which are not mentioned at all by Haag. There is also the issue of demographics. Most particle and experimental physicists will read and study Weinberg's book, but very few will read Haag's book. Because of these differences, Fleming (2002, 136) suggests that one might question whether the two books are really about the same subject. This gives rise to the question whether any formulation of QFT is worthy of philosophical attention to its foundations. In particular, there is a debate between Wallace (2006, 2011) and Fraser (2009, 2011) over whether an interpretation of QFT should be based on the standard textbook treatment of QFT or an axiomatic formulation of QFT.

2. Von Neumann and the Foundations of Quantum Theory

In the late 1920s, von Neumann developed the separable Hilbert space formulation of quantum mechanics, which later became the definitive one (from the standpoint of mathematical rigor, at least). In the mid-1930s, he worked extensively on lattice theory (see the entry on quantum logic), rings of operators, and continuous geometries. Part of his expressed motivation for developing these mathematical theories was to develop an appropriate framework for QFT and a better foundation for quantum mechanics. During this time, he noted two closely related structures, modular lattices and finite type-II factors (a special type of ring of operators), that have what he regarded as desirable features for quantum theory. These observations led to his developing a more general framework, continuous geometries, for quantum theory. Matters did not work out as von Neumann had expected. He soon realized that such geometries must have a transition probability function, if they are to be used to describe quantum mechanical phenomena, and that the resulting structure is not a generalization at all beyond the operator rings that were already available. Moreover, it was determined much later that the type-III factors are the most important type of ring of operators for quantum theory. In addition, a similar verdict was delivered much later with regard to his expectations concerning lattice theory. The lattices that are appropriate for quantum theory are orthomodular — every modular ortholattice is orthomodular, but the converse is false. Of the three mathematical theories, it is the rings of operators that have proven to be the most important framework for quantum theory. It is possible to use a ring of operators to model key features of physical systems in a purely abstract, algebraic setting (this is discussed in section 4.1). A related issue concerns whether it is necessary to choose a representation of the ring in a Hilbert space; see Haag and Kastler (1964), Ruetsche (2003), and Kronz and Lupher (2005) for further discussion of this issue. In any case, the separable Hilbert space remains a crucial framework for quantum theory. The simplest examples of separable Hilbert spaces are the finite dimensional ones, in which case the algebra of operators is a type-In factor (n is a positive integer). The operators are n-by-n complex matrices, which are typically used to describe internal degrees of freedom such as spin, as in the sketch below. Readers wanting to familiarize themselves with these basic examples should consult the entry on quantum mechanics.
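As a concrete numerical illustration of the finite-dimensional case (a hypothetical sketch, not drawn from the entry's sources), the type-I2 factor of 2-by-2 complex matrices models a spin-1/2 degree of freedom, with probabilities given by the Born-rule trace formula discussed below:

    import numpy as np

    Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)  # spin-z observable
    plus_z = np.array([1, 0], dtype=complex)               # eigenvector |+z>
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)     # the state |+x>

    rho = np.outer(psi, psi.conj())        # density operator for |+x>
    P = np.outer(plus_z, plus_z.conj())    # projection onto |+z>

    print(np.trace(rho @ P).real)   # Born-rule probability Tr(rho P) = 0.5
    print(np.trace(rho @ Sz).real)  # expectation value of spin-z: 0.0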

2.1 The Separable Hilbert Space Formulation of Quantum Mechanics

Matrix mechanics and wave mechanics were formulated at roughly the same time, between 1925 and 1926. In July 1925, Heisenberg finished his seminal paper “On a Quantum Theoretical Interpretation of Kinematical and Mechanical Relations”. Two months later, Born and Jordan finished their paper, “On Quantum Mechanics”, which is the first rigorous formulation of matrix mechanics. Two months after this, Born, Heisenberg, and Jordan finished “On Quantum Mechanics II”, which is an elaboration of the earlier Born and Jordan paper; it was published in early 1926. These three papers are reprinted in (van der Waerden 1967). Meanwhile, Schrödinger was working on what eventually became his four famous papers on wave mechanics. The first was received by Annalen der Physik in January 1926, the second was received in February, and then the third in May and the fourth in June. All four are reprinted in (Schrödinger 1928).

Schrödinger was the first to raise the question of the relationship between matrix mechanics and wave mechanics in (Schrödinger 1926), which was published in Annalen in spring 1926 between the publication of his second and third papers of the famous four. This paper is also reprinted in (Schrödinger 1928). It contains the germ of a mathematical equivalence proof, but it does not contain a rigorous proof of equivalence: the mathematical framework that Schrödinger associated with wave mechanics is a space of continuous and normalizable functions, which is too small to establish the appropriate relation with matrix mechanics. Shortly thereafter, Dirac and Jordan independently provided a unification of the two frameworks. But their respective approaches required essential use of delta functions, which were suspect from the standpoint of mathematical rigor. In 1927, von Neumann published three papers in Göttinger Nachrichten that placed quantum mechanics on a rigorous mathematical foundation and included a rigorous proof (i.e., without the use of delta functions) of the equivalence of matrix and wave mechanics. These papers are reprinted in (von Neumann 1961–1963, Volume I, Numbers 8–10). In the preface to his famous 1932 treatise on quantum mechanics (von Neumann 1955), which is an elegant summary of the separable Hilbert space formulation of quantum mechanics that he provided in the earlier papers, he acknowledges the simplicity and utility of Dirac's formulation of quantum mechanics, but finds it ultimately unacceptable. He indicates that he cannot endure the use of what could then only be regarded as mathematical fictions. Examples of these fictions include Dirac's assumption that every self-adjoint operator can be put in diagonal form and his use of delta functions, which von Neumann characterizes as “improper functions with self-contradictory properties”. His stated purpose is to formulate a framework for quantum mechanics that is mathematically rigorous.

What follows is a brief sketch of von Neumann's strategy. First, he recognized the mathematical framework of matrix mechanics as what would now be characterized as an infinite dimensional, separable Hilbert space. Here the term “Hilbert space” denotes a complete vector space with an inner product; von Neumann imposed the additional requirement of separability (having a countable basis) in his definition of a Hilbert space. He then attempted to specify a set of functions that would instantiate an (infinite-dimensional) separable Hilbert space and could be identified with Schrödinger's wave mechanics. He began with the space of square-integrable functions on the real line. To satisfy the completeness condition, that all Cauchy sequences of functions converge (in the mean) to some function in that space, he specified that integration must be defined in the manner of Lebesgue. To define an inner product operation, he specified that the set of Lebesgue square-integrable functions must be partitioned into equivalence classes modulo the relation of differing on a set of measure zero. That the elements of the space are equivalence classes of functions rather than functions is sometimes overlooked, and it has interesting ramifications for interpretive investigations. It has been argued in (Kronz 1999), for example, that separable Hilbert space is not a suitable framework for quantum mechanics under Bohm's ontological interpretation (also known as Bohmian mechanics).
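In modern notation, the space that von Neumann arrived at can be summarized as follows, where the integral is a Lebesgue integral and ~ identifies functions that differ only on a set of measure zero:

    L^2(\mathbb{R}) = \Big\{ f : \mathbb{R} \to \mathbb{C} \ \Big|\ \int |f(x)|^2\, dx < \infty \Big\} \Big/ \sim, \qquad \langle f \mid g \rangle = \int_{-\infty}^{+\infty} f^*(x)\, g(x)\, dx.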

2.2 Rings of Operators, Quantum Logics, and Continuous Geometries

In a letter to Birkhoff from 1935, von Neumann says: “I would like to make a confession which may seem immoral: I do not believe in Hilbert space anymore”; the letter is published in (von Neumann, 2005). The confession is indeed startling since it comes from the champion of the separable Hilbert space formulation of quantum mechanics and it is issued just three years after the publication of his famous treatise, the definitive work on the subject. The irony is compounded by the fact that less than two years after his confession to Birkhoff, his mathematical theorizing about the abstract mathematical structure that was to supersede the separable Hilbert space, continuous geometries with a transition probability, turned out not to provide a generalization of the separable Hilbert space framework. It is compounded further in that subsequent developments in mathematical physics initiated and developed by von Neumann ultimately served to strengthen the entrenchment of the separable Hilbert space framework in mathematical physics (especially with regard to quantum theory). These matters are explained in more detail in Section 4.1.

Three theoretical developments come together for von Neumann in his theory of continuous geometries during the seven years following 1932: the algebraic approach to quantum mechanics, quantum logics, and rings of operators. By 1934, von Neumann had already made substantial moves towards an algebraic approach to quantum mechanics with the help of Jordan and Wigner — their article, “On an Algebraic Generalization of the Quantum Mechanical Formalism”, is reprinted in (von Neumann 1961–1963, Vol. II, No. 21). In 1936, he published a second paper on this topic, “On an Algebraic Generalization of the Quantum Mechanical Formalism (Part I)”, which is reprinted in (von Neumann 1961–1963, Vol. III, No. 9). Neither work was particularly influential, as it turns out. A related paper by Birkhoff and von Neumann, “The Logic of Quantum Mechanics”, was also published in 1936, and it is reprinted in (von Neumann 1961–1963, Vol. IV, No. 7). It was seminal to the development of a sizeable body of literature on quantum logics. It should be noted, however, that this happened only after modularity, a key postulate for von Neumann, was replaced with orthomodularity (a weaker condition). The nature of the shift is clearly explained in (Holland 1970): modularity is in effect a weakening of the distributive laws (limiting their validity to certain selected triples of lattice elements), and orthomodularity is a weakening of modularity (limiting the validity of the distributive laws to an even smaller set of triples of lattice elements). The shift from modularity to orthomodularity was first made in (Loomis 1955). Rapid growth of literature on orthomodular lattices and the foundations of quantum mechanics soon followed. For example, see (Pavicic 1992) for a fairly exhaustive bibliography of quantum logic up to 1990, which has over 1800 entries.
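For reference, the two conditions may be stated as follows. The orthomodular law is the instance of the modular law in which c is the orthocomplement of a, which makes it plain that orthomodularity is the weaker condition:

    \text{modular:} \quad a \le b \ \Rightarrow\ a \vee (c \wedge b) = (a \vee c) \wedge b \ \ \text{for all } c;
    \text{orthomodular:} \quad a \le b \ \Rightarrow\ a \vee (a^{\perp} \wedge b) = b.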

Of substantially greater note for the foundations of quantum theory are six papers by von Neumann (three jointly published with Murray) on rings of operators, which are reprinted in (von Neumann 1961–1963, Vol. III, Nos. 2–7). The first two, “On Rings of Operators” and a sequel “On Rings of Operators II”, were published in 1936 and 1937, and they were seminal to the development of the other four. The third, “On Rings of Operators: Reduction Theory”, was written during 1937–1938 but not published until 1949. The fourth, “On Infinite Direct Products”, was published in 1938. The remaining two, “On Rings of Operators III” and “On Rings of Operators IV”, were published in 1941 and 1943, respectively. This massive work on rings of operators was very influential and continues to have an impact in pure mathematics, mathematical physics, and the foundations of physics. Rings of operators are now referred to as “von Neumann algebras” following Dixmier, who first referred to them by this name (stating that he did so following a suggestion made to him by Dieudonné) in the introduction to his 1957 treatise on operator algebras (Dixmier 1981).

A von Neumann algebra is a *-subalgebra of the set of bounded operators B(H) on a Hilbert space H that is closed in the weak operator topology. It is usually assumed that the von Neumann algebra contains the identity operator. A *-subalgebra contains the adjoint of every operator in the algebra, where the “*” denotes the adjoint. There are special types of von Neumann algebras that are called “factors”. A von Neumann algebra is a factor if its center (which is the set of elements that commute with all elements of the algebra) is trivial, meaning that it only contains scalar multiples of the identity element. Moreover, von Neumann showed in his reduction-theory paper that all von Neumann algebras that are not factors can be decomposed as a direct sum (or integral) of factors. There are three mutually exclusive and exhaustive factor types: type-I, type-II, and type-III. Each type has been classified into (mutually exclusive and exhaustive) sub-types: types In (n = 1,2,…,∞), IIn (n = 1,∞), IIIz (0 ≤ z ≤ 1). As mentioned above, the type-In factors (n < ∞) correspond to finite dimensional Hilbert spaces, while type-I∞ corresponds to the infinite dimensional separable Hilbert space that provides the rigorous framework for wave and matrix mechanics. Von Neumann and Murray distinguished the subtypes for type-I and type-II, but were not able to do so for the type-III factors. Subtypes were not distinguished for these factors until the 1960s and 1970s — see Chapter 3 of (Sunder 1987) or Chapter 5 of (Connes 1994) for details.
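These definitions can be stated compactly in terms of the commutant. For a *-subalgebra M of B(H) containing the identity, von Neumann's double commutant theorem says that being closed in the weak operator topology is equivalent to the purely algebraic condition M = M′′:

    M' = \{ T \in B(H) : TA = AT \ \text{for all } A \in M \}, \qquad M \ \text{is a von Neumann algebra} \iff M = M'', \qquad M \ \text{is a factor} \iff M \cap M' = \mathbb{C}\, I.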

As a result of his earlier work on the foundations of quantum mechanics and his work on quantum logic with Birkhoff, von Neumann came to regard the type-II1 factors as likely to be the most relevant for physics. This is a substantial shift since the most important class of algebra of observables for quantum mechanics was thought at the time to be the set of bounded operators on an infinite-dimensional separable Hilbert space, which is a type-I∞ factor. A brief explanation for this shift is provided below. See the well-informed and lucid account presented in (Rédei 1998) for a much fuller discussion of von Neumann's views on fundamental connections between quantum logic, rings of operators (particularly type-II1 factors), foundations of probability theory, and quantum physics. It is worth noting that von Neumann regarded the type-III factors as a catch-all class for the “pathological” operator algebras; indeed, it took several years after the classificatory scheme was introduced to demonstrate the existence of such factors. It is ironic that the predominant view now seems to be that the type-III factors are the most relevant class for physics (particularly for QFT and quantum statistical mechanics). This point is elaborated further in Section 4.1 after explaining below why von Neumann's program never came to fruition.

In the introduction to the first paper in the series of four entitled “On Rings of Operators”, Murray and von Neumann list two reasons why they are dissatisfied with the separable Hilbert space formulation of quantum mechanics. One has to do with a property of the trace operation, which is the operation appearing in the definition of the probabilities for measurement results (the Born rule), and the other with domain problems that arise for unbounded observable operators. The trace of the identity is infinite when the separable Hilbert space is infinite-dimensional, which means that it is not possible to define a correctly normalized a priori probability for the outcome of an experiment (i.e., a measurement of an observable). By definition, the a priori probability for an experiment is the one that assigns equal likelihood to any two distinct outcomes. Thus, the probability must be zero for each distinct outcome when there is an infinite number of such outcomes, which can occur if and only if the space is infinite dimensional. It is not clear why von Neumann believed that it is necessary to have an a priori probability for every experiment, especially since von Mises clearly believed that a priori probabilities (“uniform distributions” in his terminology) do not always exist (von Mises 1981, pp. 68 ff.) and von Neumann was influenced substantially by von Mises on the foundations of probability (von Neumann 1955, p. 198 fn.). Later, von Neumann changed the basis for his expressed dissatisfaction with infinite dimensional Hilbert spaces from probabilistic to algebraic considerations (Birkhoff and von Neumann 1936, p. 118); namely, that they violate Hankel's principle of the preservation of formal law, which leads one to try to preserve modularity — a condition that holds in finite-dimensional Hilbert spaces but not in infinite-dimensional Hilbert spaces. The problem with unbounded operators arises from their being defined only on a merely dense subset of the elements of the space. This means that algebraic operations on unbounded operators (sums and products) cannot in general be defined; for example, it is possible that two unbounded operators A, B are such that the range of B and the domain of A are disjoint, in which case the product AB is meaningless.
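The trace problem can be made explicit. In a type-In factor with n < ∞, the trace can be normalized so that the identity receives weight one, and the a priori probability of an outcome represented by a projection P is its normalized trace; in the infinite-dimensional case no such normalization is available (here {e_k} is any orthonormal basis):

    \text{type-I}_n\ (n < \infty): \quad \tau(A) = \tfrac{1}{n}\, \mathrm{Tr}(A), \quad \tau(I) = 1, \quad \tau(P) = \dim(P)/n;
    \dim H = \infty: \quad \mathrm{Tr}(I) = \sum_{k=1}^{\infty} \langle e_k \mid e_k \rangle = \infty.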

The problems mentioned above do not arise for type-In factors, if n < ∞, nor do they arise for type-II1. That is to say, these factor types have a finite trace operation and are not plagued with the domain problems of unbounded operators. Particularly noteworthy is that the lattice of projections of each of these factor types (type-In for n < ∞ and type-II1) is modular. By contrast, the set of bounded operators on an infinite-dimensional separable Hilbert space, a type-I∞ factor, is not modular; rather, it is only orthomodular. These considerations serve to explain why von Neumann regarded the type-II1 factor as the proper generalization of the type-In (n < ∞) for quantum physics rather than the type-I∞ factor. The shift in the literature from modular to orthomodular lattices that was characterized above is in effect a shift back to von Neumann's earlier position (prior to his confession). But, as was already mentioned, it now seems that this was not the best move either.
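The contrast can be registered by the range of the normalized trace on projections (the Murray-von Neumann dimension function): in the finite type-I case it takes finitely many values, whereas in the type-II1 case it fills the whole unit interval — the continuity that the name “continuous geometry” alludes to:

    \text{type-I}_n: \quad \tau(P) \in \{0, \tfrac{1}{n}, \tfrac{2}{n}, \ldots, 1\}; \qquad \text{type-II}_1: \quad \tau(P) \in [0, 1].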

It was von Neumann's hope that his program for generalizing quantum theory would emerge from a new mathematical structure known as “continuous geometry”. He wanted to use this structure to bring together the three key elements that were mentioned above: the algebraic approach to quantum mechanics, quantum logics, and rings of operators. He sought to forge a strong conceptual link between these elements and thereby provide a proper foundation for generalizing quantum mechanics that does not make essential use of Hilbert space (unlike rings of operators). Unfortunately, it turns out that the class of continuous geometries is too broad for the purposes of axiomatizing quantum mechanics. The class must be suitably restricted to those having a transition probability. It turns out that there is then no substantial generalization beyond the separable Hilbert space framework. An unpublished manuscript that was finished by von Neumann in 1937 was prepared and edited by Israel Halperin, and then published as (von Neumann 1981). A review of the manuscript by Halperin was published in (von Neumann 1961–1963, Vol. IV, No. 16) years before the manuscript itself was published. In that review, Halperin notes the following:

The final result, after 200 pages of deep reasoning is (essentially): every such geometry with transition probability can be identified with the projection geometry of a finite factor in some finite or infinite dimensional Hilbert space (Im or II1). This result indicates that continuous geometries do not provide new useful mathematical descriptions of quantum mechanical phenomena beyond that already available from rings of operators.

This unfortunate development does not, however, completely undermine von Neumann's efforts to generalize quantum mechanics. On the contrary, his work on rings of operators does shed significant light on the way forward. The upshot of subsequent developments is that von Neumann settled on the wrong factor type for the foundations of physics.

3. Dirac and the Foundations of Quantum Theory

Dirac's formal framework for quantum mechanics was very useful and influential despite its lack of mathematical rigor. It was used extensively by physicists and it inspired some powerful mathematical developments in functional analysis. Eventually, mathematicians developed a suitable framework for placing Dirac's formal framework on a firm mathematical foundation, which is known as a rigged Hilbert space (and is also referred to as a Gelfand Triplet). This came about as follows. A rigorous definition of the delta function became possible in distribution theory, which was developed by Schwartz from the mid-1940s to the early 1950s. Distribution theory inspired Gelfand and collaborators during the mid-to-late 1950s to formulate the notion of a rigged Hilbert space, the firm foundation for Dirac's formal framework. This development was facilitated by Grothendieck's notion of a nuclear space, which he introduced in the mid-1950s. The rigged Hilbert space formulation of quantum mechanics was then developed independently by Böhm and by Roberts in 1966. Since then, it has been extended to a variety of different contexts in the quantum domain including decay phenomena and the arrow of time. The mathematical developments of Schwartz, Gelfand, and others had a substantial effect on QFT as well. Distribution theory was taken forward by Wightman in developing the axiomatic approach to QFT from the mid-1950s to the mid-1960s. In the late 1960s, the axiomatic approach was explicitly put into the rigged Hilbert space framework by Bogoliubov and co-workers.

Although these developments were only indirectly influenced by Dirac, by way of the mathematical developments that are associated with his formal approach to quantum mechanics, there are other elements of his work that had a more direct and very substantial impact on the development of QFT. In the 1930s, Dirac developed a Lagrangian formulation of quantum mechanics and applied it to quantum fields (Dirac 1933), and the latter inspired Feynman to develop the path-integral approach to QFT (Feynman 1948). The mathematical foundation for path-integral functionals is still lacking (Rivers 1987, pp. 109–134), though substantial progress has been made (DeWitt-Morette et al. 1979). Despite such shortcomings, it remains the most useful and influential approach to QFT to date. In the 1940s, Dirac developed a form of quantum electrodynamics that involved an indefinite metric (Dirac 1943) — see also (Pauli 1943) in that connection. This had a substantial influence on later developments, first in quantum electrodynamics in the early 1950s with the Gupta-Bleuler formalism, and in a variety of QFT models such as vector meson fields and quantum gravity fields by the late 1950s — see Chapter 2 of (Nagy 1966) for examples and references.

3.1 Dirac's Delta Function, Principles, and Bra-Ket Notation

Dirac's attempt to prove the equivalence of matrix mechanics and wave mechanics made essential use of the delta function, as indicated above. The delta function was used by physicists before Dirac, but it became a standard tool in many areas of physics only after Dirac very effectively put it to use in quantum mechanics. It then became widely known by way of his textbook (Dirac 1930), which was based on a series of lectures on quantum mechanics given by Dirac at Cambridge University. This textbook saw three later editions: the second in 1935, the third in 1947, and the fourth in 1958. The fourth edition has been reprinted many times. Its staying power is due, in part, to another innovation that was introduced by Dirac in the third edition, his bra-ket formalism. He first published this formalism in (Dirac 1939), but the formalism did not become widely used until after the publication of the third edition of his book. There is no question that these tools, first the delta function and then the bra-ket notation, were extremely effective for physicists practising and teaching quantum mechanics, both for setting up equations and for performing calculations. Most quantum mechanics textbooks use delta functions and plane waves, which are key elements of Dirac's formal framework, but they are not included in von Neumann's rigorous mathematical framework for quantum mechanics. Working physicists as well as teachers and students of quantum mechanics often use Dirac's framework because of its simplicity, elegance, power, and relative ease of use. Thus, from the standpoint of pragmatics, Dirac's framework is much preferred over von Neumann's. The notion of a rigged Hilbert space placed Dirac's framework on a firm mathematical foundation.

3.2 The Rigged Hilbert Space Formulation of Quantum Mechanics

Mathematicians worked very hard to provide a rigorous foundation for Dirac's formal framework. One key element was Schwartz's theory of distributions, which was developed between the mid-1940s and the early 1950s (Schwartz 1945; 1950–1951). Another key element, the notion of a nuclear space, was developed by Grothendieck in the mid-1950s (Grothendieck 1955). This notion made possible the generalized-eigenvector decomposition theorem for self-adjoint operators in rigged Hilbert space — for the theorem see (Gelfand and Vilenken 1964, pp. 119–127), and for a brief historical account of the convoluted path leading to it see (Berezanskii 1968, pp. 756–760). The decomposition principle provides a rigorous way to handle observables such as position and momentum in the manner in which they are presented in Dirac's formal framework. These mathematical developments culminated in the early 1960s with Gelfand and Vilenkin's characterization of a structure that they referred to as a rigged Hilbert space (Gelfand and Vilenkin 1964, pp. 103–127). It is unfortunate that their chosen name for this mathematical structure is doubly misleading. First, there is a natural inclination to regard it as denoting a type of Hilbert space, one that is rigged in some sense, but this inclination must be resisted. Second, the term rigged has an unfortunate connotation of illegitimacy, as in the terms rigged election or rigged roulette table, and this connotation must be dismissed as prejudicial. There is nothing illegitimate about a rigged Hilbert space from the standpoint of mathematical rigor (or any other relevant standpoint). A more appropriate analogy may be drawn using the notion of a rigged ship: the term rigged in this context means fully equipped. But this analogy has its limitations since a rigged ship is a fully equipped ship, but (as the first point indicates) a rigged Hilbert space is not a Hilbert space, though it is generated from a Hilbert space in the manner now to be described.

A rigged Hilbert space is a dual pair of spaces (Φ, Φx) that can be generated from a separable Hilbert space Η using a sequence of norms (or semi-norms); the sequence of norms is generated using a nuclear operator (a good approximate meaning is an operator of trace-class, meaning that the trace of the modulus of the operator is finite). In the mathematical theory of topological vector spaces, the space Φ is characterized in technical terms as a nuclear Fréchet space. To say that Φ is a Fréchet space means that it is a complete metric space, and to say that it is nuclear means that it is the projective limit of a sequence of Hilbert spaces in which the associated topologies get rapidly finer with increasing n (i.e., the convergence conditions are increasingly strict); the term nuclear is used because the Hilbert-space topologies are generated using a nuclear operator. In distribution theory, the space Φ is characterized as a test-function space, where a test-function is thought of as a very well-behaved function (being continuous, n-times differentiable, having a bounded domain or at least dropping off exponentially beyond some finite range, etc). Φx is a space of distributions, and it is the topological dual of Φ, meaning that it corresponds to the complete space of continuous linear functionals on Φ. It is also the inductive limit of a sequence of Hilbert spaces in which the topologies get rapidly coarser with increasing n. Because the elements of Φ are so well-behaved, Φx may contain elements that are not so well-behaved, some being singular or improper functions (such as Dirac's delta function). Φ is the topological anti-dual of Φx, meaning that it is the complete set of continuous anti-linear functionals on Φx; it is anti-linear rather than linear because multiplication by a scalar is defined in terms of the scalar's complex conjugate.

It is worth noting that neither Φ nor Φx is a Hilbert space in that each lacks an inner product that induces a metric with respect to which the space is complete, though for each space there is a topology with respect to which the space is complete. Nevertheless, each of them is closely related to the Hilbert space Η from which they are generated: Φ is densely embedded in Η, which in turn is densely embedded in Φx. Two other points are worth noting. First, dual pairs of this sort can also be generated from a pre-Hilbert space, which is a space that has all the features of a Hilbert space except that it is not complete, and doing so has the distinct advantage of avoiding the partitioning of functions into equivalence classes (in the case of functions spaces). The term rigged Hilbert space is typically used broadly to include dual pairs generated from either a Hilbert space or a pre-Hilbert space. Second, the term Gelfand triplet is sometimes used instead of the term rigged Hilbert space, though it refers to the ordered set (Φ, Η, Φx), where Η is the Hilbert space used to generate Φ and Φx.
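The standard concrete example is the triplet built from the Schwartz space S(ℝ) of infinitely differentiable, rapidly decreasing functions: the delta function, which has no place in the Hilbert space itself, lives in the outer space of tempered distributions:

    S(\mathbb{R}) \subset L^2(\mathbb{R}) \subset S^{\times}(\mathbb{R}), \qquad \delta_a \in S^{\times}(\mathbb{R}), \quad \delta_a(\varphi) = \varphi(a) \ \ \text{for all } \varphi \in S(\mathbb{R}).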

The dual pair (Φ, Φx) possesses the means to represent important operators for quantum mechanics that are problematic in a separable Hilbert space, particularly the unbounded operators that correspond to the observables position and momentum, and it does so in a particularly effective and unproblematic manner. As already noted, these operators have no eigenvalues or eigenvectors in a separable Hilbert space; moreover, they are only defined on a dense subset of the elements of the space and this leads to domain problems. These undesirable features also motivated von Neumann to seek an alternative to the separable Hilbert space framework for quantum mechanics, as noted above. In a rigged Hilbert space, the operators corresponding to position and momentum can have a complete set of eigenfunctionals (i.e., generalized eigenfunctions). The key result is known as the nuclear spectral theorem (and it is also known as the Gelfand-Maurin theorem). One version of the theorem says that if A is a symmetric linear operator defined on the space Φ and it admits a self-adjoint extension to the Hilbert space H, then A possesses a complete system of eigenfunctionals belonging to the dual space Φx (Gelfand and Shilov 1977, chapter 4). That is to say, provided that the stated condition is satisfied, A can be extended by duality to Φx, its extension Ax is continuous on Φx (in the operator topology in Φx), and Ax satisfies a completeness relation (meaning that it can be decomposed in terms of its eigenfunctionals and their associated eigenvalues). The duality formula for extending A to Φx is <φ|Axκ> = <Aφ|κ>, for all φ∈Φ and for all κ∈Φx. The completeness relation says that for all φ,θ∈Φ:

<Aφ|θ> = ∫_{v(A)} λ <φ|λ><λ|θ>* dμ(λ),

where v(A) is the set of all generalized eigenvalues of Ax (i.e., the set of all scalars λ for which there is a nonzero λ∈Φx such that <φ|Axλ> = λ<φ|λ> for all φ∈Φ).

The rigged Hilbert space representation of these observables is about as close as one can get to Dirac's elegant and extremely useful formal representation with the added feature of being placed within a mathematically rigorous framework. It should be noted, however, that there is a sense in which it is a proper generalization of Dirac's framework. The rigging (based on the choice of a nuclear operator that determines the test function space) can result in different sets of generalized eigenvalues being associated with an operator. For example, the set of (generalized) eigenvalues for the momentum operator (in one dimension) corresponds to the real line, if the space of test functions is the set S of infinitely differentiable functions of x which together with all derivatives vanish faster than any inverse power of x as x goes to infinity, whereas its associated set of eigenvalues is the complex plane, if the space of test functions is the set D of infinitely differentiable functions with compact support (i.e., vanishing outside of a bounded region of the real line). If complex eigenvalues are not desired, then S would be a more appropriate choice than D — see (Nagel 1989) for a brief discussion. But there are situations in which it is desirable for an operator to have complex eigenvalues. This is so, for example, when a system exhibits resonance scattering (a type of decay phenomenon), in which case one would like the Hamiltonian to have complex eigenvalues — see (Böhm & Gadella 1989). (Of course, it is impossible for a self-adjoint operator to have complex eigenvalues in a Hilbert space.)
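To illustrate (in units where ħ = 1, writing P for the one-dimensional momentum operator): the generalized eigenfunctionals of P are plane waves regarded as functionals on the test-function space,

    P = -i\, \frac{d}{dx}, \qquad f_p(x) = e^{ipx}, \qquad P^{\times} f_p = p\, f_p, \qquad f_p(\varphi) = \int_{-\infty}^{+\infty} e^{ipx}\, \varphi(x)\, dx.

For φ∈S the integral defines a continuous functional only when p is real, whereas for φ∈D (compact support) it does so for every complex p; this is how the choice of test-function space fixes the set of generalized eigenvalues.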

Soon after the development of the theory of rigged Hilbert spaces by Gelfand and his associates, the theory was used to develop a new formulation of quantum mechanics. This was done independently by Böhm (1966) and Roberts (1966). It was later demonstrated that the rigged Hilbert space formulation of quantum mechanics can handle a broader range of phenomena than the separable Hilbert space formulation. That broader range includes scattering resonances and decay phenomena (Böhm and Gadella 1989), as already noted. Böhm later extended this range to include a quantum mechanical characterization of the arrow of time (Böhm et al. 1997). The Prigogine school developed an alternative characterization of the arrow of time using the rigged Hilbert space formulation of quantum mechanics (Antoniou and Prigogine 1993). Kronz used this formulation to characterize quantum chaos in open quantum systems (Kronz 1998, 2000). Castagnino and Gadella used it to characterize decoherence in closed quantum systems (Castagnino & Gadella 2003).

4. Mathematical Rigor: Two Paths

4.1 Algebraic Quantum Field Theory

In 1943, Gelfand and Neumark published an important paper on a class of normed rings, which are now known as abstract C*-algebras. Their paper, (Gelfand & Neumark 1943), was influenced by Murray and von Neumann's work on rings of operators, which was discussed in the previous section. In their paper, Gelfand and Neumark focus attention on abstract normed *-rings. They show that any C*-algebra can be given a concrete representation in a Hilbert space (which need not be separable). That is to say, there is an isomorphic mapping of the elements of a C*-algebra into the set of bounded operators of the Hilbert space. Four years later, Segal published a paper (Segal 1947a) that served to complete the work of Gelfand and Neumark by specifying the definitive procedure for constructing concrete (Hilbert space) representations of an abstract C*-algebra. It is called the GNS construction (after Gelfand, Neumark, and Segal), and it is sketched below. That same year, Segal published an algebraic formulation of quantum mechanics (Segal 1947b), which was substantially influenced by (though deviating somewhat from) von Neumann's algebraic formulation of quantum mechanics (von Neumann 1961–1963, Vol. III, No. 9), which is cited in the previous section. It is worth noting that although C*-algebras satisfy Segal's postulates, the algebra that is specified by his postulates is a more general structure known as a Segal algebra. Every C*-algebra is a Segal algebra, but the converse is false since Segal's postulates do not require an adjoint operation to be defined. If a Segal algebra is isomorphic to the set of all self-adjoint elements of a C*-algebra, then it is a special Segal algebra. Although the mathematical theory of Segal algebras has been fairly well developed, a C*-algebra is the most important type of algebra that satisfies Segal's postulates.
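In outline, the GNS construction runs as follows (a standard sketch): a state ω on a C*-algebra 𝒜 — a positive, normalized linear functional — yields an inner product, a Hilbert space, and a representation with cyclic vector Ωω:

    \langle A \mid B \rangle_{\omega} = \omega(A^{*}B), \qquad N_{\omega} = \{ A \in \mathcal{A} : \omega(A^{*}A) = 0 \}, \qquad \mathcal{H}_{\omega} = \overline{\mathcal{A}/N_{\omega}},
    \pi_{\omega}(A)\,[B] = [AB], \qquad \Omega_{\omega} = [I], \qquad \omega(A) = \langle \Omega_{\omega} \mid \pi_{\omega}(A)\, \Omega_{\omega} \rangle.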

The algebraic formulations of quantum mechanics that were developed by von Neumann and Segal did not change the way that quantum mechanics was done. Nevertheless, they did have a substantial impact in two related contexts: QFT and quantum statistical mechanics. The key difference leading to the impact has to do with the domain of applicability. The domain of quantum mechanics consists of finite quantum systems, meaning quantum systems that have a finite number of degrees of freedom. In QFT and quantum statistical mechanics, by contrast, the systems of special interest — i.e., quantum fields and particle systems in the thermodynamic limit, respectively — are infinite quantum systems, meaning quantum systems that have an infinite number of degrees of freedom. Dirac was the first to recognize the importance of infinite quantum systems for QFT in (Dirac 1927), which is reprinted in (Schwinger 1958).

Segal was the first to suggest that the beauty and power of the algebraic approach becomes evident when working with an infinite quantum system (Segal 1959, p. 5). The key advantage of the algebraic approach, according to Segal (1959, pp. 5–6), is that one may work in the abstract algebraic setting where it is possible to obtain interacting fields from free fields by an automorphism on the algebra, one that need not be unitarily implementable. Segal notes (1959, p. 6) that von Neumann had a similar idea (that field dynamics are to be expressed as an automorphism on the algebra) in an unpublished manuscript, (von Neumann 1937). Segal notes this advantage in response to a result obtained by (Haag 1955), that field theory representations of free fields are unitarily inequivalent to representations of interacting fields. Haag mentions that von Neumann first discovered ‘different’ (unitarily inequivalent) representations much earlier in (von Neumann 1938). A different way of approaching unitarily inequivalent representations, by contrast with Segal's approach, was later presented by Haag and Kastler (1964), who argued that unitarily inequivalent representations are physically equivalent. Their notion of physical equivalence was based on Fell's mathematical idea of weak equivalence (Fell 1960).

After indicating important similarities between his and von Neumann's approaches to infinite quantum systems, Segal draws an important contrast that serves to give the advantage to his approach over von Neumann's. The key mathematical difference, according to Segal, is that von Neumann was working with a weakly closed ring of operators (meaning that the ring of operators is closed with respect to the weak operator topology), whereas Segal is working with a uniformly closed ring of operators (closed with respect to the uniform topology). This difference is crucial because it has the following interpretive significance, which rests on operational considerations:

The present intuitive idea is roughly that the only measurable field-theoretic variables are those that can be expressed in terms of a finite number of canonical operators, or uniformly approximated by such; the technical basis is a uniformly closed ring (more exactly, an abstract C*-algebra). The crucial difference between the two varieties of approximation arises from the fact that, in general, weak approximation has only analytical significance, while uniform approximation may be defined operationally, two observables being close if the maximum (spectral) value of the difference is small (Segal 1959, p. 7).

Initially, it appeared that Segal's assessment of the relative merits of von Neumann algebras and C*-algebras with respect to physics was substantiated by a seminal paper, (Haag and Kastler 1964). Among other things, Haag and Kastler introduced the key axioms of the algebraic approach to QFT. They also argued that unitarily inequivalent representations are “physically equivalent” to each other. However, the use of physical equivalence to show that unitarily inequivalent representations are not physically significant has been challenged (see Kronz and Lupher 2005 and Ruetsche 2003). The prominent role of type-III factor von Neumann algebras within the algebraic approach to quantum statistical mechanics and QFT raises further doubts about Segal's assessment.

The algebraic approach has proven most effective in quantum statistical mechanics. It is extremely useful for characterizing many important macroscopic quantum effects including crystallization, ferromagnetism, superfluidity, structural phase transition, Bose-Einstein condensation, and superconductivity. A good introductory presentation is (Sewell 1986), and for a more advanced discussion see (Bratteli & Robinson 1979–1981). In algebraic quantum statistical mechanics, an infinite quantum system is defined by specifying an abstract algebra of observables. A particular state may then be used to specify a concrete representation of the algebra as a set of bounded operators in a Hilbert space. Among the most important types of states that are considered in algebraic statistical mechanics are the equilibrium states, which are often referred to as “KMS states” (since they were first introduced by the physicists Kubo, Martin, and Schwinger). There is a continuum of KMS states since there is at least one KMS state for each possible temperature value τ of the system, for 0 ≤ τ ≤ +∞. Given an automorphism group, each KMS state corresponds to a representation of the algebra of observables that defines the system, and each of these representations is unitarily inequivalent to any other. It turns out that each representation that corresponds to a KMS state is a factor: if τ = 0 then it is a type-I factor, if τ = +∞ then it is a type-II factor, and if 0 < τ < +∞ then it is a type-III factor. Thus, type-III factors play a predominant role in algebraic quantum statistical mechanics.
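For reference, the KMS condition can be stated as follows: where α_t is the one-parameter group of time-translation automorphisms and β is the inverse temperature (β = 1/τ in units where Boltzmann's constant is 1), a state ω is a KMS state at β just in case, for a suitably dense set of elements A and B of the algebra,

    \omega\big(A\, \alpha_{i\beta}(B)\big) = \omega(B\, A),

where α_{iβ}(B) is obtained by analytically continuing t ↦ α_t(B) to the imaginary value t = iβ.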

In algebraic QFT, an algebra of observables is associated with bounded regions of Minkowski spacetime (and unbounded regions including all of spacetime by way of certain limiting operations) that are required to satisfy standard axioms of local structure: isotony, locality, covariance, additivity, positive spectrum, and a unique invariant vacuum state. The resulting set of algebras on Minkowski spacetime that satisfy these axioms is referred to as the net of local algebras. It has been shown that special subsets of the net of local algebras — those corresponding to various types of unbounded spacetime regions such as tubes, monotones (a tube that extends infinitely in one direction only), and wedges — are type-III factors. Of particular interest for the foundations of physics are the algebras that are associated with bounded spacetime regions, such as a double cone (the finite region of intersection of a forward and a backward light cone). As a result of work done over the last thirty years, the local algebras of relativistic QFT appear to be type-III von Neumann algebras (see Halvorson 2007, pp. 749–752 for more details).

One important area for interpretive investigation is the existence of a continuum of unitarily inequivalent representations of an algebra of observables. Attitudes towards unitarily inequivalent representations differ drastically in the philosophical literature. In (Wallace 2006) unitarily inequivalent representations are not considered a foundational problem for QFT, while in (Ruetsche 2003) and (Kronz and Lupher 2005) unitarily inequivalent representations are considered physically significant.

4.2 Wightman's Axiomatic Quantum Field Theory

In the early 1950s, theoretical physicists were inspired to axiomatize QFT. One motivation for axiomatizing a theory, though not the one operative in the case now under discussion, is to express the theory in a completely rigorous form in order to standardize the expression of the theory as a mature conceptual edifice. Another motivation, more akin to the case in point, is to embrace a strategic withdrawal to the foundations to determine how renovation should proceed on a structure that is threatening to collapse due to internal inconsistencies. One then looks for existing piles (fundamental postulates) that penetrate through the quagmire to solid rock, and attempts to drive home others at advantageous locations. Properly supported elements of the superstructure (such as the characterization of free fields, dispersion relations, etc.) may then be distinguished from those that are untrustworthy. The latter need not be razed immediately, and may ultimately gain supportive rigging from components not yet constructed. In short, the theoretician hopes that the axiomatization will effectively separate sense from nonsense, and that this will serve to make possible substantial progress towards the development of a mature theory. Grounding in a rigorous mathematical framework can be an important part of the exercise, and that was a key aspect of the axiomatization of QFT by Wightman.

In the mid-1950s, Schwartz's theory of distributions was used by Wightman (1956) to develop an abstract formulation of QFT, which later came to be known as axiomatic quantum field theory. Mature statements of this formulation are presented in (Wightman and Gårding 1964) and in (Streater and Wightman 1964). It was further refined in the late 1960s by Bogoliubov, who explicitly placed axiomatic QFT in the rigged Hilbert space framework (Bogoliubov et al. 1975, p. 256). It is by now standard within the axiomatic approach to put forth the following six postulates: spectral condition (there are no negative energies or imaginary masses), vacuum state (it exists and is unique), domain axiom for fields (quantum fields correspond to operator-valued distributions), transformation law (unitary representation in the field-operator (and state) space of the restricted inhomogeneous Lorentz group — “restricted” means inversions are excluded, and “inhomogeneous” means that translations are included), local commutativity (field measurements at spacelike separated regions do not disturb one another), asymptotic completeness (the scattering matrix is unitary — this assumption is sometimes weakened to cyclicity of the vacuum state with respect to the polynomial algebra of free fields). Rigged Hilbert space entered the axiomatic framework by way of the domain axiom, so this axiom will be discussed in more detail below.

In classical physics, a field is characterized as a scalar- (or vector- or tensor-) valued function φ(x) on a domain that corresponds to some subset of spacetime points. In QFT, a field is characterized by means of an operator rather than a function. A field operator may be obtained from a classical field function by quantizing the function in the canonical manner — cf. (Mandl 1959, pp. 1–17). For convenience, the field operator associated with φ(x) is denoted below by the same expression (since the discussion below only concerns field operators). Field operators that are relevant for QFT are too singular to be regarded as realistic, so they are smoothed out over their respective domains using elements of a space of well-behaved functions known as test functions. There are many different test-function spaces (Gelfand and Shilov 1977, Chapter 4). At first, the test-function space of choice for axiomatic QFT was the Schwartz space Σ, the space of functions that have partial derivatives of all orders at each point and that, together with all their derivatives, decrease faster than x^−n for any n∈Ν as x→∞. It was later determined that some realistic models require the use of other test-function spaces. The smoothed field operators φ[f ] for f ∈Σ are known as quantum field operators, and they are defined as follows

φ[f ] = ∫ d⁴x f(x)φ(x).

The integral (over the domain of the field operator) of the product of the test function f (x) and the field operator φ(x) serves to “smooth out” the field operator over its domain; a more colloquial description is that the field is “smeared out” over space or spacetime. It is postulated within the axiomatic approach that a quantum field operator φ[f ] may be represented as an unbounded operator on a separable Hilbert space Η, and that {φ[f ]: f ∈Σ} (the set of smoothed field operators associated with φ(x)) has a dense domain Ω in Η. The smoothed field operators are often referred to as operator-valued distributions, and this means that for every Φ,Ψ∈Ω there is an element of the space of distributions Σx, the topological dual of Σ, that may be equated to the expression <Φ|φ[ ]|Ψ>. If Ω’ denotes the set of functions obtained by applying all polynomials of elements of {φ[f ]: f ∈Σ} onto the unique vacuum state, then the axioms mentioned above entail that Ω’ is dense in Η (asymptotic completeness) and that Ω’⊂Ω (domain axiom). The elements of Ω correspond to possible states of the elements of {φ[f ]: f ∈Σ}. Though only one field has been considered thus far, the formalism is easily generalizable to a countable number of fields with an associated set of countably indexed field operators φk(x) — cf. (Streater and Wightman 1964).
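A standard illustration of the distributional character of the smoothed field operators is given by their vacuum expectation values, the Wightman functions; for instance (writing |0> for the vacuum state), the two-point function

    W(f, g) = \langle 0 \mid \varphi[f]\, \varphi[g] \mid 0 \rangle = \int d^{4}x\, d^{4}y\ f(x)\, g(y)\, W(x, y)

is an ordinary (numerical) distribution whose kernel W(x, y) is singular rather than an ordinary function.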

As noted earlier, the rigged Hilbert space framework enters by way of the domain axiom. Concerning that axiom, Wightman says the following (in the notation introduced above, which differs slightly from that used by Wightman).

At a more advanced stage in the theory it is likely that one would want to introduce a topology into Ω such that φ[f ] becomes a continuous mapping of Ω into Ω. It is likely that this topology has to be rather strong. We want to emphasize that so far we have only required that <Φ|φ[f ]|Ψ> be continuous in f for Φ,Ψ fixed; continuity in the pair Φ,Ψ cannot be expected before we put a suitable strong topology on Ω (Wightman and Gårding 1964, p. 137).

In (Bogoliubov et al. 1975, p. 256), a topology is introduced to serve this role, though it is introduced on Ω’ rather than on Ω. Shortly thereafter, they assert that it is not hard to show that Ω’ is a complete nuclear space with respect to this topology. This serves to justify a claim they make earlier in their treatise:

… it is precisely the consideration of the triplet of spaces Ω⊂Η⊂Ω* which give a natural basis for both the construction of a general theory of linear operators and the correct statement of certain problems of quantum field theory (Bogoliubov et al. 1975, p. 34).

Note that they refer to the triplet Ω⊂Η⊂Ω* as a rigged Hilbert space. In the terminology introduced above, they refer in effect to the Gelfand triplet (Ω, Η, Ωx) or (equivalently) the associated rigged Hilbert space (Ω, Ωx).

Finally, it is worth mentioning that the status of the field in algebraic QFT differs from that in Wightman's axiomatic QFT. In both approaches, a field is an abstract system having an infinite number of degrees of freedom. Sub-atomic quantum particles are field effects that appear in special circumstances. In algebraic QFT, there is a further abstraction: the most fundamental entities are the elements of the algebra of local (and quasi-local) observables, and the field is a derived notion. The term local means bounded within a finite spacetime region, and an observable is not regarded as a property belonging to an entity other than the spacetime region itself. The term quasi-local is used to indicate that we take the union of all bounded spacetime regions. In short, the algebraic approach focuses on local (or quasi-local) observables and treats the notion of a field as a derivative notion; whereas the axiomatic approach (as characterized just above) regards the field concept as the fundamental notion. Indeed, it is common practice for proponents of the algebraic approach to distance themselves from the field notion by referring to their theory as “local quantum physics”. The two approaches are mutually complementary — they have developed in parallel and have influenced each other by analogy (Wightman 1976). For a discussion of the close connections between these two approaches, see (Haag 1996, p. 106).

5. Philosophical Issues

Lagrangian QFT is our most empirically well-confirmed physical theory. At the same time, it excels with regard to expediency of calculations, provides useful intuitive understanding, and is closer to the actual practice of physicists in applying the theory to make predictions. However, it is also subject to a number of criticisms from the standpoint of mathematical rigor. Mathematically rigorous formulations of QFT, such as Wightman's axiomatic QFT and algebraic QFT, provide clear conceptual frameworks within which precise questions and answers to interpretational issues can be formulated. However, they are sorely lacking with regard to facilitating the derivation of empirical consequences. This unhappy situation gives rise to a serious conundrum for those who are interested in the philosophical foundations of QFT, meaning those who are looking for ontological and epistemological insights. It is not clear which QFT framework should be the focus of these foundational efforts.

One view is that these two approaches to QFT, the rigorous and the pragmatic, are rival research programs. Those engaged in foundational efforts in QFT need to reach a consensus, a group choice, concerning which approach should be chosen for these efforts. There are proponents on both sides of this issue. Fraser (2009) argues that the interpretation of QFT should be based on the mathematically rigorous approaches; she focuses on constructive field theory, a branch of Wightman's axiomatic formulation of QFT. By contrast, Wallace (2006) argues that an interpretation of QFT should be based on what he calls “Lagrangian” QFT, which is closely associated with what is characterized above as the pragmatic approach. An alternative view is that there is no need to form a consensus; both approaches are worthy of research into their philosophical foundations.

Bibliography



Related Entries

quantum mechanics | quantum mechanics: Bohmian mechanics | quantum mechanics: the role of decoherence in | quantum theory: quantum field theory | quantum theory: quantum logic and probability theory